[Pkg-clamav-commits] [SCM] Debian repository for ClamAV branch, debian/unstable, updated. debian/0.95+dfsg-1-6156-g094ec9b

Török Edvin edwin at clamav.net
Sun Apr 4 01:20:45 UTC 2010


The following commit has been merged in the debian/unstable branch:
commit 91a09b94367a0699a247d0460583eda8af47dbd3
Author: Török Edvin <edwin at clamav.net>
Date:   Mon Feb 15 18:17:47 2010 +0200

    Update to LLVM upstream SVN r96221.
    
    Squashed commit of the following:
    
    commit b743e68144f4a59dac95dc80251fd794ba58e8d8
    Author: Oscar Fuentes <ofv at wanadoo.es>
    Date:   Mon Feb 15 15:17:05 2010 +0000
    
        CMake: Fixed syntax in conditional.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96221 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 011dfdfde2e50769b9f276ad614bd888b12fc7ae
    Author: Andrew Lenharth <alenhar2 at cs.uiuc.edu>
    Date:   Mon Feb 15 15:00:44 2010 +0000
    
        Fix changes from r75027
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96220 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d31ee026569a5f24c445267e016cf9910583ecd1
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Feb 15 10:28:37 2010 +0000
    
        When testing whether a given SCEV depends on a temporary symbolic
        name, test whether the SCEV itself is that temporary symbolic name,
        in addition to checking whether the symbolic name appears as a
        possibly-indirect operand.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96216 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 657373b3503c19e66e2dd7a396f14b8986883f81
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 15 08:04:42 2010 +0000
    
        Check in the first big step of rewriting DAGISelEmitter to
        produce a table based matcher instead of gobs of C++ Code.
    
        Though it's not done yet, the shrinkage seems promising,
        the table for the X86 ISel is 75K and still has a lot of
        optimization to come (compare to the ~1.5M of .o generated
        the old way, much of which will go away).
    
        The code is currently disabled by default (the #if 0 in
        DAGISelEmitter.cpp).  When enabled it generates a dead
        SelectCode2 function in the DAGISel Header which will
        eventually replace SelectCode.
    
        There is still a lot of stuff left to do, which are
        documented with a trail of FIXMEs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96215 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0dc91ca186b2e7dd7c308c4e6cf7d3e7969ca5e3
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 15 07:11:34 2010 +0000
    
        give SDValue an operator->, allowing V->isTargetOpcode() and
        many other natural things.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96214 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f02f4757dc20afc15f8a81a882b4f1ecefd00077
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 15 06:39:31 2010 +0000
    
        don't make insanely large node numbers for no reason;
        packing somewhat densely is better than not.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96213 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a4aac10ec660a06e22dce191128eb5024ab21518
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 15 06:38:41 2010 +0000
    
        no need to add the instruction count anymore.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96212 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1bba1bac8b95c78b92a4a72b5705e2a25713755f
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Mon Feb 15 03:17:06 2010 +0000
    
        Revert r96130 ("Forward parameter options as '-option=param'").
    
        This behaviour must be configurable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96210 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bd9b922998ba89ea8f5d1189eace6e32789abaad
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 15 02:18:26 2010 +0000
    
        enhance raw_svector_ostream::write_impl to work with unbuffered streams,
        which may call write_impl on things that are not the usual buffer.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96209 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 55386b01a4cecd00ced70506053c42a484377f97
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 15 02:17:50 2010 +0000
    
        make PadToColumn return the stream so you can use:
         OS.PadToColumn(42) << "foo";
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96208 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2fb413bb51d0dee81a73bd9172f9143a34131f00
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Mon Feb 15 01:45:47 2010 +0000
    
        Ignore DBG_VALUE in a couple more places.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96207 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f26cfe1cb289c64d1b4837ee27cf8a82c807c52b
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Feb 15 00:21:43 2010 +0000
    
        When restoring a saved insert location, check to see if the saved
        insert location has become an "inserted" instruction since the time
        it was saved. If so, advance to the first non-"inserted" instruction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96203 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ace26149d15397912b33d27b2581d5c8152ff748
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Feb 14 22:33:49 2010 +0000
    
        constize
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96199 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e8205a02f245f50066f35d18b185b10f6991437a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Feb 14 22:22:58 2010 +0000
    
        clean up a bunch of code, move some random predicates
        on TreePatternNode to be methods on TreePatternNode.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96197 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a7345cb780331e29aaf65aac22dd0fa47cb2ab3a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Feb 14 21:53:19 2010 +0000
    
        mark "addr" as having type "iPTR", eliminating some type comparisons
        in the generated dag isel file.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96193 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d61d15ec8e1c357e5b95d2cb5165134f8a969d49
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Feb 14 21:11:53 2010 +0000
    
        remove the DisablePatternForFastISel predicate, which is a check
        that predated -fast-isel which attempted to speed up the dag pattern
        matchers at -O0.  Since fast-isel is around, this is basically
        obsolete and removing it shrinks the generated dag isels.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96188 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5e3d5991f3236a4d47a10fed9b3a49fdac06e873
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Feb 14 21:10:33 2010 +0000
    
        add an insertion operator.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96187 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9579a01585fd387c4d3d4dcc422b6adcec0d91d4
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Feb 14 21:10:15 2010 +0000
    
        tidy up
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96186 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 99fca24dccf031de508b81747ae8370f86360845
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sun Feb 14 18:51:39 2010 +0000
    
        Fix whitespace.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96179 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c406240f44cb131e976cb4d51b7ab83a7b0b0ed9
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sun Feb 14 18:51:20 2010 +0000
    
        Fix a comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96178 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 82460b98c2e062fd5148427a9cb1cb78a4909769
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sun Feb 14 18:50:49 2010 +0000
    
        When complicated expressions are broken down into subexpressions
        with multiplication by constants distributed through, occasionally
        those subexpressions can include both x and -x. For now, if this
        condition is discovered within LSR, just prune such cases away,
        as they won't be profitable. This fixes a "zero allocated in a
        base register" assertion failure.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96177 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 40742a3a3882348c797524c7c86a9ec5814725aa
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Sun Feb 14 18:27:42 2010 +0000
    
        fixes to pagesel/banksel inserter.
        1. restore these across direct/indirect calls.
        2. restore pagesel for any macros with gotos.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96175 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 33e8473580e0fbf1f3a69fdc43933f5587017f55
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sun Feb 14 18:25:41 2010 +0000
    
        Forgot to commit the header
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96174 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cc2ace8a01bf1365ef8ee7b51c5c11ae8948fa0d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sun Feb 14 18:20:09 2010 +0000
    
        follow-on to PR6280
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96172 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 47600ebabd031883ce4391d8b6cb731be4fdb52a
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sun Feb 14 15:19:54 2010 +0000
    
        Drop winmcasminfo and use normal AT&T COFF for all windows targets.
        Otherwise the AT&T asm printer is used with an incompatible MCAsmInfo and
        there is no way to override this behaviour.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96165 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3efdcb0c01aaf4f8ce026b8951a1355c07ea83ac
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Sun Feb 14 06:32:20 2010 +0000
    
        Try to factorize the specification of saturating add/subtract operations a bit,
        as suggested by Bob Wilson.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96153 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8a3f2908bb5c8bb7a7a431b46da88c0312fbf07f
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sun Feb 14 03:21:49 2010 +0000
    
        Actually, this code doesn't have to be quite so conservative in
        the no-TLI case. But it should still default to declining the
        transformation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96152 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8bbd4650fd1deb087fc0d14ef7ddd75433a56f61
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sun Feb 14 03:12:47 2010 +0000
    
        In rememberInstruction, if the value being remembered is the
        current insertion point, advance the current insertion point.
        This avoids a use-before-def situation in a testcase extracted
        from clang which is difficult to reduce to a reasonable-sized
        regression test.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96151 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0c74bfa2a1ddd7a2ef248b0230fedf9876b8f575
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sun Feb 14 02:48:58 2010 +0000
    
        Simplify this code; no need for a custom subclass if it doesn't need
        to override anything from the parent class.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96150 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 74fca1b9ff661b978f7618e0728f73e0fdd54a6d
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sun Feb 14 02:47:26 2010 +0000
    
        Remove a 'protected' keyword, now that SCEVExpander is no longer
        intended to be subclassed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96149 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f264808f2cb4a8b38128c3a7ae2b4f1d879d3a4e
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sun Feb 14 02:45:21 2010 +0000
    
        Don't attempt aggressive post-inc uses if TargetLowering is not available,
        because profitability can't be sufficiently approximated.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96148 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2aed470b47f8328773c0ddb95cf580fd3925d934
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sun Feb 14 01:47:19 2010 +0000
    
        2.7: Note that DataTypes.h moved.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96143 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit eead7ba8da9d823e3bc0a65749598cab9884a89f
    Author: John McCall <rjmccall at apple.com>
    Date:   Sat Feb 13 23:40:16 2010 +0000
    
        Make LSR not crash if invoked without target lowering info, e.g. if invoked
        from opt.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96135 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 44f9090da450cf9dacfb75992fd9aeb26f68fc4d
    Author: Eric Christopher <echristo at apple.com>
    Date:   Sat Feb 13 23:38:01 2010 +0000
    
        Fix a problem where we had bitcasted operands that gave us
        odd offsets since the bitcasted pointer size and the offset pointer
        size are going to be different types for the GEP vs base object.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96134 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8042b4e7edc9d542d2e58143943ffb3beaed9390
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Sat Feb 13 22:37:28 2010 +0000
    
        Forward parameter options as '-option=parameter'.
    
        Some tools do not like the '-option parameter' form. Should this be
        configurable?
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96130 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 92cd5e38ac6edcef4b749c69b80746c4259989a2
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Sat Feb 13 22:37:13 2010 +0000
    
        Support some more Darwin-only options.
    
        We really need a conditional compilation mechanism...
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96129 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 49f2808120c4d0f15687725809b0e536474e791a
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Sat Feb 13 22:37:00 2010 +0000
    
        Support -mfix-and-continue properly.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96128 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 50d6b85789129c236b23be735e1b32dd63abc8ca
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Sat Feb 13 22:36:43 2010 +0000
    
        Revert r94752, turns out we don't need to touch these options.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96127 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f23debd520b74ad86a216ada0e01f7fd0ffc61e2
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Feb 13 22:23:47 2010 +0000
    
        Trim trailing spaces (aka, trigger rebuild).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96126 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1b0a6878e6c460e05f4b03c46bac57a7fa59a59c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Feb 13 20:06:50 2010 +0000
    
        pull a bunch of huge inline methods in the PatternCodeEmitter
        class out of line.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96113 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 663660c70218e3a2d5278c28c9565a818151c332
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Feb 13 19:16:53 2010 +0000
    
        teach the encoder to handle pseudo instructions like FP_REG_KILL,
        encoding them into nothing.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96110 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 30b038f436af6b6fd4628abf88d49d40c23d0fc1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Feb 13 19:07:06 2010 +0000
    
        remove dead code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96109 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e253572d2fad7b182b894cbc7289ffc3375560f6
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Feb 13 09:45:59 2010 +0000
    
        MCAssembler: Fix pcrel relocations. Oh and,
        --
        ddunbar at ozzy:tmp$ clang -m32 -integrated-as hello.c && ./a.out
        hello world!
        --
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96096 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cf1ebe1dbfdd13d852f13bf8d6974f3f5d87a0d4
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Feb 13 09:29:02 2010 +0000
    
        MC/Mach-O: Start emitting fixups/relocations for instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96095 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bc7fba3c3fed472e1ad8dc3e649dd60482500e98
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Feb 13 09:28:54 2010 +0000
    
        MCAssembler: Switch MCAsmFixup to storing MCFixupKind instead of just a size.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96094 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3f39f69797b8f84d4a40ab7ba02a93f55996444c
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Feb 13 09:28:43 2010 +0000
    
        MCAssembler: Sink fixup list into MCDataFragment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96093 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6e60008ce79947b3de0cfaf1db17750a162c93aa
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Feb 13 09:28:32 2010 +0000
    
        MCAssembler: Switch MCFillFragment to only taking constant values. Symbolic expressions can always be emitted as data + fixups.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96092 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 09b30df90d48aca2e01f3f79e5067796f221f058
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Feb 13 09:28:22 2010 +0000
    
        MC/Mach-O: Implement EmitValue using data fragments + fixups instead of fill fragment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96091 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 290d22e0420d15aa4ea88760f8e3e4dacc97892f
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Feb 13 09:28:15 2010 +0000
    
        MCAssembler: Start applying fixups in the data section.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96090 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fb1e5f4b351ffa37946abf905337573a8dc0f70b
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Feb 13 09:28:03 2010 +0000
    
        MCAssembler: Add assorted dump() methods.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96089 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ed2a7790092b11455a8595c78c875f5f975e4162
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Feb 13 09:27:52 2010 +0000
    
        X86: Move extended MCFixupKinds into X86FixupKinds.h
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96088 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ca1cd7f210bbbb36d3124bf4acc3a04631042752
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Feb 13 05:35:08 2010 +0000
    
        Split some code out to a helper function (FindReusablePredBB)
        and add a doxygen comment.
    
        Cache the phi entry to avoid doing tons of
        PHINode::getBasicBlockIndex calls in the common case.
    
        On my insane testcase from re2c, this speeds up CGP from
        617.4s to 7.9s (78x).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96083 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cdfc77eb6fbafeed46c78bc811fcc2209946c066
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Feb 13 05:01:14 2010 +0000
    
        Speed up codegen prepare from 3.58s to 0.488s.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96081 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 74a9792208d00e0f761b05f81ffeafbace02c098
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Feb 13 04:24:19 2010 +0000
    
        PHINode::getBasicBlockIndex is O(n) in the number of inputs
        to a PHI, avoid it in the common case where the BB occurs
        in the same index for multiple phis.  This speeds up CGP on
        an insane testcase from 8.35 to 3.58s.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96080 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b57f90e32d5fe848d9f74f11a28bc850bde69e87
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Feb 13 04:15:26 2010 +0000
    
        iterate over preds using PHI information when available instead of
        using pred_begin/end.  It is much faster.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96079 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ecdb218b1b4d1f8ab7bf8d17b0c9ec123e8cd566
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Feb 13 04:04:42 2010 +0000
    
        speed up CGP a bit by scanning predecessors through phi operands
        instead of with pred_begin/end.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96078 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a17ee30ede09175873860c5dead5c87c4f998729
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Feb 13 03:42:24 2010 +0000
    
        add encoder support and tests for rdtscp
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96076 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d12893a3644282505cc9646c5b105377b431f48a
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Sat Feb 13 02:51:09 2010 +0000
    
        Add SETEND and BXJ instructions for disassembly only.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96075 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 16468d9e438555645102fbb2778432bd35abd4c3
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Sat Feb 13 02:06:11 2010 +0000
    
        Added the rdtscp instruction to the x86 instruction
        tables.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96073 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0cf36cd25850354adaf0f3edf3de938e0ec3bc80
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Sat Feb 13 02:06:10 2010 +0000
    
        Fix PR6283.
    
        When coalescing with a physreg, remember to add imp-def and imp-kill when
        dealing with sub-registers.
    
        Also fix a related bug in VirtRegRewriter where substitutePhysReg may
        reallocate the operand list on an instruction and invalidate the reg_iterator.
        This can happen when a register is mentioned twice on the same instruction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96072 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8f5f11fcbf30f78b7a5329a531e27f245db19c45
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Feb 13 02:06:02 2010 +0000
    
        Fix a pruning heuristic which implicitly assumed that SmallPtrSet is
        deterministically sorted.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96071 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit eff3bca8f97276e92e90f8d56c9e4291a66c4341
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Feb 13 01:56:41 2010 +0000
    
        Teach MachineFrameInfo to track maximum alignment while stack objects are being
        created. This ensures it's updated at all time. It means targets which perform
        dynamic stack alignment would know whether it is required and whether the
        frame pointer register cannot be made available to register allocation.
        This is a fix for rdar://7625239. Sorry, I can't create a reasonably sized test
        case.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96069 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0e628ba996e2e2af6de56252cca6e53f426301b7
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Sat Feb 13 01:51:53 2010 +0000
    
        Enable the inlinehint attribute in the Inliner.
    
        Functions explicitly marked inline will get an inlining threshold slightly
        more aggressive than the default for -O3. This means that -O3 builds are
        mostly unaffected while -Os builds will be a bit bigger and faster.
    
        The difference depends entirely on how many 'inline's are sprinkled on the
        source.
    
        In the CINT2006 suite, only these tests are significantly affected under -Os:
    
                       Size   Time
        471.omnetpp   +1.63% -1.85%
        473.astar     +4.01% -6.02%
        483.xalancbmk +4.60%  0.00%
    
        Note that 483.xalancbmk runs too quickly to give useful timing results.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96066 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8d3a51e5f880151165ac69bba2515f328ed66515
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Sat Feb 13 01:48:34 2010 +0000
    
        Fixed encodings for invlpg, invept, and invvpid.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96065 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 58d70fd72375c42ccbb96705419ed1e13a44fdc7
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Feb 13 01:28:07 2010 +0000
    
        MC/AsmParser: Attempt to constant fold expressions up-front. This ensures we avoid fixups for obvious cases like '-(16)'.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96064 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dbc39e6574315ebfefa022c80b089212f9bb796e
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Sat Feb 13 01:21:01 2010 +0000
    
        Added a bunch of saturating add/subtract instructions for disassembly only.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96063 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3116cb4e9495b87fdab0c951cc7d155f6b33b6c3
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Feb 13 00:49:29 2010 +0000
    
        rip out the 'heinous' x86 MCCodeEmitter implementation.
        We still have the templated X86 JIT emitter, *and* the
        almost-copy in X86InstrInfo for getting instruction sizes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96059 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7045e8c400a5496101069eb11da81da4a40ccad0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Sat Feb 13 00:41:14 2010 +0000
    
        remove special cases for vmlaunch, vmresume, vmxoff, and swapgs;
        fix swapgs to be spelled right.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96058 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 62b451b2ee84750519f2508d2945ed576a1cd034
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Sat Feb 13 00:31:44 2010 +0000
    
        Besides removing phi cycles that reduce to a single value, also remove dead
        phi cycles.  Adjust a few tests to keep dead instructions from being optimized
        away.  This (together with my previous change for phi cycles) fixes Apple
        radar 7627077.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96057 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a7f18489ed04021e50814b8b8232e2c303f5bd89
    Author: Dan Gohman <gohman at apple.com>
    Date:   Sat Feb 13 00:19:39 2010 +0000
    
        Override dominates and properlyDominates for SCEVAddRecExpr, as a
        SCEVAddRecExpr doesn't necessarily dominate blocks merely dominated
        by all of its operands. This fixes an abort compiling 403.gcc.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96056 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 96b7ae4a9f1f68fd1cf9ec8c8bd828aa40fdf7c5
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Feb 13 00:17:21 2010 +0000
    
        MC/X86: Push immediate operands as immediates not expressions when possible.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96055 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8587b0bf8eb4a2d867b2a0243b4a115221df45dd
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Sat Feb 13 00:03:17 2010 +0000
    
        Make PassRegistrar thread-safe since it can be modified by code running in
        separate LLVMContexts.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96051 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 11203379493732fa422589f5c9ff46e7ae4e17c3
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 12 23:54:57 2010 +0000
    
        Remove special cases for [LM]FENCE, MONITOR and MWAIT from
        encoder and decoder by using new MRM_ forms.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96048 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8002ba66e96485214e3c57b43b4d6eb7706522d1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 12 23:46:48 2010 +0000
    
        add some disassemble testcases for weird instructions
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96045 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a37c9b5a95253eb24d2658d55adb1c73b605c915
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Fri Feb 12 23:39:46 2010 +0000
    
        Reworked the Intel disassembler to support instructions
        whose opcodes extend into the ModR/M field using the
        Form field of the instruction rather than by special
        casing each instruction.  Commented out the special
        casing of VMCALL, which is the first instruction to use
        this special form.  While I was in the neighborhood,
        added a few comments for people modifying the Intel
        disassembler.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96043 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 62574fd778f9c1cd9b8e1404d1ad92daa0e8110f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 12 23:24:09 2010 +0000
    
        implement the rest of correct x86-64 encoder support for
        rip-relative addresses, and add a testcase.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96040 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3aaba2b64692bb1b59451546fba55b4d06fe5df0
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Fri Feb 12 23:16:24 2010 +0000
    
        Add the problem I just hacked around in 96015/96020.
        The solution there produces correct code, but is seriously
        deficient in several ways.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96039 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5978124b36300af6cb60dfd56c4200d4156eeb0f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 12 23:12:47 2010 +0000
    
        give MCCodeEmitters access to the current MCContext.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96038 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit faa7feddabee6e3f294785114ef824326268edf4
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Fri Feb 12 23:05:31 2010 +0000
    
        Make JIT::runFunction clean up the generated stub function.
    
        Patch by Shivram K!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96037 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0609b1e6d69aa82937e6f54af73c92c6596252da
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 12 23:00:36 2010 +0000
    
        implement infrastructure to support fixups for rip-rel
        addressing.  This isn't complete because I need an MCContext
        to generate new MCExprs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96036 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2ceb228ee2b5482144e935cbe17928f60648c601
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Fri Feb 12 22:53:19 2010 +0000
    
        Add YIELD, WFE, WFI, and SEV instructions for disassembly only.
        Plus add two formats: MiscFrm and ThumbMiscFrm.  Some of the disassembly-only
        instructions are changed from Pseudo Format to MiscFrm Format.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96032 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 87ee5c9565c4362991e2fd99adf365f216574eff
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 12 22:47:55 2010 +0000
    
        pull the rip-relative addressing mode case up early.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96031 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4410c7506473b5ff6c075356eed07fca9c87ec26
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 12 22:39:06 2010 +0000
    
        fixme resolved!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96029 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit da3abd11aead14b8b5812bb8a5dc7fff0687f399
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 12 22:36:47 2010 +0000
    
        start producing reloc_pcrel_4byte/reloc_pcrel_1byte for calls.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96028 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c52c146a6c499821a37e4a260b17dcb65f46e2d1
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Feb 12 22:34:54 2010 +0000
    
        Fix a comment typo.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96027 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6f45b9dfd57cd29af5a286c95c67e7f8831e6599
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 12 22:27:07 2010 +0000
    
        enhance the immediate field encoding to know whether the immediate
        is pc relative or not, mark call and branches as pcrel.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96026 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7312910f142ffc20c8ed466df2808be6e6ffbc65
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Feb 12 22:17:21 2010 +0000
    
        Load / store multiple instructions cannot load / store sp. Sorry, can't come up with a reasonable test case.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96023 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 78597677837983b2bd54345792c5727e5cb7df0d
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Fri Feb 12 22:00:40 2010 +0000
    
        This should have gone in with 96015, see comments there.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96020 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 97637b840f772fd49bbb73c0ec32b578a30ddc59
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Fri Feb 12 21:59:23 2010 +0000
    
        Add halfword multiply accumulate long SMLALBB/BT/TB/TT for disassembly only.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96019 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0bcde6783ba1ef149066bf5f226ea3b5bb204a62
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 12 21:54:28 2010 +0000
    
        doxygenize some comments, patch by Peter Collingbourne!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96018 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d9ccefd1972a73e609d2a4460ef24c4d1277332f
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Fri Feb 12 21:35:34 2010 +0000
    
        When save/restoring CR at prolog/epilog, in a large
        stack frame, the prolog/epilog code was using the same
        register for the copy of CR and the address of the save slot.  Oops.
        This is fixed here for Darwin, sort of, by reserving R2 for this case.
        A better way would be to do the store before the decrement of SP,
        which is safe on Darwin due to the red zone.
    
        SVR4 probably has the same problem, but I don't know how to fix it;
        there is no red zone and R2 is already used for something else.
        I'm going to leave it to someone interested in that target.
    
        Better still would be to rewrite the CR-saving code completely;
        spilling each CR subregister individually is horrible code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96015 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 964eda1b7e6d2c274537d6887598d94afc39cb86
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 12 20:49:41 2010 +0000
    
        Add support for a union type in LLVM IR.  Patch by Talin!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96011 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8d4b966df8f910f36c5d8e02befd5b8da5c39038
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Fri Feb 12 20:48:24 2010 +0000
    
        Add SWP (Swap) and SWPB (Swap Byte) for disassembly only.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96010 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a477ca64a2ff06c295fc5d79a4ae847044964764
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Feb 12 20:39:35 2010 +0000
    
        Also recognize armv6t2-* and armv5te-* triplets.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96008 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1abb17f23368025054fac2a266293c52308413c7
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Feb 12 20:39:25 2010 +0000
    
        Fix a case of mismatched types in an Add that turned up in 447.dealII.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96007 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit db0084c8abbc02963d7087b1b89ed3ecb6bbc5d3
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Feb 12 20:13:44 2010 +0000
    
        Add ARM bitcode file magic.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96006 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b20bd310eda5e159d28993229dabb4f6e481102c
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Feb 12 19:35:25 2010 +0000
    
        Reapply 95979, a compile-time speedup, now that the bug it exposed is fixed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96005 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 91e7b9265d247c5206860c9655fdee62d16a8512
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Feb 12 19:20:37 2010 +0000
    
        Fix this code to avoid dereferencing an end() iterator in
        offset distributions it doesn't expect.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@96002 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f012c07c5e30ab36e1fc81c6e6a27f19dba4fd44
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Fri Feb 12 18:55:33 2010 +0000
    
        Add CPS, MRS, MRSsys, MSR, MSRsys for disassembly only.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95999 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0abe9e7645ab2f98ceafea4644c210d270a3b5d2
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Fri Feb 12 18:40:17 2010 +0000
    
        Rewrite handling of DBG_VALUE; previous algorithm
        didn't handle
        X =
        Y<dead> = use X
        DBG_VALUE(X)
        I was hoping to avoid this approach as it's slower,
        but I don't think it can be done.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95996 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e92534406bed8cb0d869cf6f03bf0a0b5e8967e4
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 12 18:17:23 2010 +0000
    
        1. modernize the constantmerge pass, using densemap/smallvector.
        2. don't bother trying to merge globals in non-default sections,
           doing so is quite dubious at best anyway.
        3. fix a bug reported by Arnaud de Grandmaison where we'd try to
           merge two globals in different address spaces.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95995 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a996e79bdf9a1e3c94c87657c639ea3a4ff2a3cc
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 12 18:05:00 2010 +0000
    
        rename test
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95993 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0ac9e9e4b79e63595606f0c2a1f956b460c1529b
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Feb 12 17:27:08 2010 +0000
    
        Revert "Reverse the order for collecting the parts of an addrec. The order", it
        is breaking llvm-gcc bootstrap.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95988 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5d685a4fab6fd002bfd297096e3571df4deab415
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Fri Feb 12 15:29:13 2010 +0000
    
        Testcases for recent stdcall / fastcall mangling improvements
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95982 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 412cd99612db589101a9a01cf76109f517045a77
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Fri Feb 12 15:28:56 2010 +0000
    
        Setup correct data layout to match gcc's expectations on mingw32.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95981 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 24d33f599734b757d8574dbacbbfb7d09f43618d
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Fri Feb 12 15:28:40 2010 +0000
    
        Cleanup stdcall / fastcall name mangling.
        This should fix a lot of problems we saw so far, e.g. PRs 5851 & 2936
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95980 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d8ed2ebf5ac86bfd799d9bdc66f99abe73a96620
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Feb 12 11:08:26 2010 +0000
    
        Reverse the order for collecting the parts of an addrec. The order
        doesn't matter, except that ScalarEvolution tends to need less time
        to fold the results this way.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95979 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5a607b8949e73a508c26771e26f7c30b7cd26c0c
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Feb 12 10:34:29 2010 +0000
    
        Reapply the new LoopStrengthReduction code, with compile time and
        bug fixes, and with improved heuristics for analyzing foreign-loop
        addrecs.
    
        This change also flattens IVUsers, eliminating the stride-oriented
        groupings, which makes it easier to work with.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95975 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ca41a7d6ca95108aa285a499434a85e65830b200
    Author: Lang Hames <lhames at gmail.com>
    Date:   Fri Feb 12 09:43:37 2010 +0000
    
        * Updated the cost matrix normalization procedure to better handle infinite costs.
        * Enabled R1/R2 application for nodes with infinite spill costs in the Briggs heuristic (made
        safe by the changes to the normalization procedure).
        * Removed a redundant header.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95973 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 864d3c380bfe2a46ca0bb1e45a77232723a040d0
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Feb 12 07:48:46 2010 +0000
    
        Update test to match 95961.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95971 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 253134e443bed0a1abbac39f1159b43103805680
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Feb 12 02:35:03 2010 +0000
    
        Test for 95961.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95962 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2563a3d8062d011dc92c68bfab12d8aadb96cb14
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 12 02:06:33 2010 +0000
    
        add a bunch of mod/rm encoding types for fixed mod/rm bytes.
        This will work better for the disassembler for modeling things
        like lfence/monitor/vmcall etc.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95960 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a11f1f613938c87def9975d1d02d24ab8e031db2
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Feb 12 02:02:23 2010 +0000
    
        Test case for 95958.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95959 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ea6d61133cdc7b23708456c2f39b58f2999e8eeb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 12 01:55:31 2010 +0000
    
        revert r95949, it turns out that adding new prefixes is not a
        great solution for the disassembler, we'll go with "plan b".
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95957 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a6e257ddfe666a2a5f2d3c785ca0db1bc464ce55
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Feb 12 01:46:54 2010 +0000
    
        MC: Fix bug where trailing tied operands were forgotten; the X86 assembler
        matcher is now free of implicit operands!
         - Still need to clean up the code now that we don't have to worry about implicit
           operands, and to make it a hard error if an instruction fails to specify all
           of its operands for some reason.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95956 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dc6dcfca28dededf4dacff3f810655dafe5f0fc1
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Fri Feb 12 01:44:23 2010 +0000
    
        Added coprocessor instructions CDP, CDP2, MCR, MCR2, MRC, MRC2, MCRR, MCRR2,
        MRRC, MRRC2.  For disassembly only.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95955 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0267dc4352974ba2cc5d21eb434570c7e11b6cde
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Feb 12 01:30:21 2010 +0000
    
        Add a new pass on machine instructions to optimize away PHI cycles that
        reduce down to a single value.  InstCombine already does this transformation
        but DAG legalization may introduce new opportunities.  This has turned out to
        be important for ARM where 64-bit values are split up during type legalization:
        InstCombine is not able to remove the PHI cycles on the 64-bit values but
        the separate 32-bit values can be optimized.  I measured the compile time
        impact of this (running llc on 176.gcc) and it was not significant.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95951 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dd9673a09a416b6f443c8191f7e4aa5272ab36c3
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Feb 12 01:22:03 2010 +0000
    
        X86: Fix definition for RCL/RCR.*m? operations -- they were getting represented
        with "tied memory operands", which is wrong.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95950 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c70ef3e2792a0a37e3517998f8aa6a7f3e9781df
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 12 01:15:16 2010 +0000
    
        add another bit of space for new kinds of instruction prefixes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95949 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3629bdb8b22324c2bfe33c036fa8cab10511548d
    Author: Nate Begeman <natebegeman at mac.com>
    Date:   Fri Feb 12 01:10:45 2010 +0000
    
        Add a missing pattern for movhps so that we get:
    
        movq	(%ecx,%edx,2), %xmm2
        movhps	(%ecx,%eax,2), %xmm2
    
        rather than:
    
        movq     (%eax, %edx, 2), %xmm2
        movq     (%eax, %ebx, 2), %xmm3
        movlhps  %xmm3, %xmm2
    
        Testcase forthcoming.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95948 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 76b82cb12274539b254a04a0423006b1a8584844
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 12 01:06:22 2010 +0000
    
        fix the encodings of monitor and mwait, which were completely
        busted in both encoders.  I'm not bothering to fix it in the
        old one at this point.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95947 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 30d51dce30851e64cdeb1e6136d6f6d5d879a4db
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 12 00:37:46 2010 +0000
    
        improve support for minix, PR6280, patch by
        Kees van Reeuwijk!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95946 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7e5dc6e9bdeefbc238c336bfba82bc403b8b69ac
    Author: Charles Davis <cdavis at mines.edu>
    Date:   Fri Feb 12 00:31:15 2010 +0000
    
        Add a new function attribute, 'alignstack'. It will indicate (when the backends
        implement support for it) that the stack should be forcibly realigned in the
        prologue (and the process reversed in the epilogue).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95945 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c33bef344039f91e8bdc63766252b3c480b6a67a
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Thu Feb 11 23:55:29 2010 +0000
    
        Reapply coalescer fix for better cross-class coalescing.
    
        This time with fixed test cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95938 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit af2b332ed86806c193669dcac77961b2d945b9a3
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 11 22:57:32 2010 +0000
    
        enhance llvm-mc -show-inst to print the enum of an instruction, like so:
    
        	testb	%al, %al                ## <MCInst #2412 TEST8rr
                                                ##   <MCOperand Reg:2>
                                                ##   <MCOperand Reg:2>>
        	jne	LBB1_7                  ## <MCInst #938 JNE_1
                                                ##   <MCOperand Expr:(LBB1_7)>>
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95935 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 68316b7ed1283ed43ef1814d51a3666ae9c30ced
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 11 22:39:10 2010 +0000
    
        add a new MCInstPrinter::getOpcodeName interface, when it is
        implemented, llvm-mc --show-inst now uses it to print the
        instruction opcode as well as the number.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95929 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 13f9209eeea6b78a89e14da0a400c7281984cc1a
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Thu Feb 11 21:51:51 2010 +0000
    
        Document binutils requirements for coff targets (cygwin / mingw32).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95928 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ef7bd26055041eee137b3b732377e8ec7843b8c8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 11 21:45:31 2010 +0000
    
        improve encoding information for branches.  We now know they have
        8 or 32-bit immediates, which allows the new encoder to handle
        them.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95927 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1d0ac4072239b9aea6e993b67470d675c94b497f
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Thu Feb 11 21:29:46 2010 +0000
    
        MC: Move assembler-backend's fixup list into the fragment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95926 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 94ad6823bc3bb8e2ce5d285bb8258c6129df8e0c
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Thu Feb 11 21:29:29 2010 +0000
    
        MC: Move MCSectionData::Fixup out to MCAsmFixup.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95925 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 252990b75fb84cd3b8005a8cd2abcdd1e3a7ebcb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 11 21:27:18 2010 +0000
    
        make getFixupKindInfo return a const reference, allowing
        the tables to be const.  Teach MCCodeEmitter to handle
        the target-indep kinds so that we don't crash on them.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95924 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e84ffbd5af615af4b4326b4a5937d8e7725fa875
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Thu Feb 11 21:19:44 2010 +0000
    
        Revert functional change. This broke a bunch of tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95921 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bf922f08f3f3dba6d8f6204270cc9d4df30c3dc3
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 11 21:17:54 2010 +0000
    
        switch to target-indep fixups for 1/2/4/8 byte data.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95920 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a62b13e4334c3dd03f90db7e9212c5629455cc4a
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Feb 11 20:58:56 2010 +0000
    
        revert 95903.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95918 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bde99e7657bb11b7edf9fcf35467c9497abfd6f8
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Thu Feb 11 20:58:45 2010 +0000
    
        It is always good to do a cross-class join when the large register has a tiny interval.
    
        Also avoid division by zero.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95917 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 892cf047c170383369f88f936e03607b87f2e4c3
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Thu Feb 11 20:31:08 2010 +0000
    
        Added LDRT/LDRBT/STRT/STRBT for disassembly only.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95916 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 021109ade238fd0e25cf8c69f7c367a461ea03ae
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 11 19:52:11 2010 +0000
    
        unbreak the build.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95915 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 34b92a3f30c7e01062f7d7eee5db972a1b1208c8
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Feb 11 19:35:26 2010 +0000
    
        llvm-db was removed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95904 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ee2e10cf787b7c935085d156343ee9b48033bc86
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Feb 11 19:35:10 2010 +0000
    
        Destroy MDNodes while destructing llvm context.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95903 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a9f95edce8035bbea61d68c4fd373e46ae2cd493
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 11 19:31:22 2010 +0000
    
        refactor x86 conditional branches to use a multipattern
        that generates the 1-byte and 4-byte immediate versions
        from one definition.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95902 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7c9872327581faf7d85aad5e570973c6f359d77c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 11 19:25:55 2010 +0000
    
        refactor the conditional jump instructions in the .td file to
        use a multipattern that generates both the 1-byte and 4-byte
        versions from the same defm
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95901 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b1ef56bd34602273702b98cbb28c6a3c589ca70b
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Thu Feb 11 19:15:20 2010 +0000
    
        Make Kaleidoscope not link against the interpreter, since that didn't
        work anyway (Interpreter::getPointerToFunction doesn't return a
        callable pointer), and improve the error message when an
        ExecutionEngine can't be created.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95896 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit df0649adf48fa3343dc1ec95975259777b87b45f
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Feb 11 19:07:04 2010 +0000
    
        Add an svn:ignore.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95895 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 98c5eed093b2f21580925541ff971003be227939
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Thu Feb 11 18:47:03 2010 +0000
    
        Forgot to also check in this file for vcvt (floating-point <-> fixed-point, VFP).
        Sorry!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95892 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 75f18b48f297a4f31af3d70dcacb6601558d174e
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Thu Feb 11 18:23:23 2010 +0000
    
        Allow for more than one DBG_VALUE targeting the
        same dead instruction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95890 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 177849e66e24ccca09fbbd282eaa4f5ed0904de7
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Thu Feb 11 18:22:31 2010 +0000
    
        Don't allow DBG_VALUE to affect codegen.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95889 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 56c787f5c53084f7283a8fe47711db65d167c9a5
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Thu Feb 11 18:17:16 2010 +0000
    
        Added VCVT (between floating-point and fixed-point, VFP) for disassembly.
        A8.6.297
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95885 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 31a24b15aa460aa543772be3b415f06432630506
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Thu Feb 11 18:12:29 2010 +0000
    
        Added BKPT/tBKPT (breakpoint) to the instruction table for disassembly purpose.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95884 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d7cf68f307458f6cc214734685f361f530893b4a
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Thu Feb 11 18:06:56 2010 +0000
    
        Use array_pod_sort instead of std::sort for improved code size.
    
        Use SmallVector instead of std::vector for better speed when indirectbr has
        few successors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95879 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9a058c4604c121efbe8d8783c17d1b20605dc938
    Author: Eric Christopher <echristo at apple.com>
    Date:   Thu Feb 11 17:44:04 2010 +0000
    
        Make sure that ConstantExpr offsets also aren't off of extern
        symbols.
    
        Thanks to Duncan Sands for the testcase!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95877 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 310f24fcfcf54e6cf1e6ca350b7b431b1c474c51
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Thu Feb 11 17:14:31 2010 +0000
    
        Add pseudo instruction TRAP for disassembly, which is encoded according to A5-21
        as the "Permanently UNDEFINED" instruction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95873 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 62b33dc015061dc780d345bf5c2e47c3b3c27051
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Thu Feb 11 10:37:57 2010 +0000
    
        Use .empty() instead of .size().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95871 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d38bd5ececc6c84836b8ee52d809e2d6e450152a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 11 08:45:56 2010 +0000
    
        don't call getX86RegNum on X86::RIP, it doesn't like that.  This
        fixes the remaining x86-64 jit failures afaik.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95867 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 46a26bbf505cc07b5e51d2d7aab9827181cbf9d3
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 11 08:41:21 2010 +0000
    
        fix a really nasty bug I introduced in r95693: r12 (and r12d,
        r12b, etc.) also encodes to an R/M value of 4, which is just
        as illegal as ESP/RSP for the non-SIB version of an address.
    
        This fixes x86-64 jit miscompilations of a bunch of programs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95866 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 321617629bf8b7cec0b12c591acff40ccc278784
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Thu Feb 11 07:16:13 2010 +0000
    
        Fix (harmless) memory leak found by memcheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95862 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ebce12bbcf2508514afc2fa55c7580d46e7e3afe
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 11 07:06:31 2010 +0000
    
        Add and commonize encoder support for all immediates.
        Stub out some dummy fixups to make things work.
    
        We can now emit fixups like this:
        	subl	$20, %esp               ## encoding: [0x83,0xec,A]
                                                ##   fixup A - offset: 2, value: 20, kind: fixup_1byte_imm
    
        Emitting $20 as a single-byte fixup to be later resolved
        by the assembler is ridiculous of course (vs just emitting
        the byte) but this is a failure of the matcher, which
        should be producing an imm of 20, not an MCExpr of 20.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95860 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f80c505d83787296cffb3e7495d3b8b7dfc35794
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 11 06:54:23 2010 +0000
    
        generalize EmitDisplacementField to work with any size
        and rename it to EmitImmediate.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95859 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 97374c6e6a29b7dff39632a1b8dc96ee86e44c1a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 11 06:51:36 2010 +0000
    
        eliminate the dead IsPCRel argument.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95858 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9e07afd152a07b1c6c1d144646f4869e9533dc33
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 11 06:49:52 2010 +0000
    
        eliminate the dead "PCAdj" logic.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95857 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 77c7077e8f39f2efa8a74091743af08db20833f4
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Thu Feb 11 06:41:30 2010 +0000
    
        Fix some of the memcheck errors found in the JIT unittests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95856 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5f3e09483b108ba4fa6ee06a1ca9540267d7d21b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 11 06:26:33 2010 +0000
    
        Rename ValueRequiresCast to ShouldOptimizeCast, to better reflect
        what it does.  Enhance it to return false for optimizing vector
        sign extensions from vector comparisons, which is the idiom used
        to get a splatted vector for a vector comparison.

        Doing this breaks vector-casts.ll, so add some compensating
        transformations to handle the important case they cover without
        depending on this canonicalization.

        This fixes rdar://7434900, a serious pessimization of vector compares.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95855 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 48df834a4a87a035ff4311057ad8fd2250415249
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 11 06:24:37 2010 +0000
    
        convert to filecheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95854 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f05a4a82b719a1bd583cdd7739079504d9963422
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 11 05:11:54 2010 +0000
    
        Make DSE only scan blocks that are reachable from the entry
        block.  Other blocks may have pointer cycles that will crash
        basicaa and other alias analyses.  In any case, there is no
        point wasting cycles optimizing dead blocks.  This fixes
        rdar://7635088
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95852 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9211ea57815c52083d4bd54c2f40fe1cbee1744f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 11 05:08:05 2010 +0000
    
        a testcase that doesn't crash GVN but could someday.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95851 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 089ed82c48bd69fe4a98ab3ef8e37ae26eabd9a1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 11 04:40:44 2010 +0000
    
        Make jump threading honor x|undef -> true and x&undef -> false,
        instead of considering x|undef -> x, which may not be true.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95850 91177308-0d34-0410-b5e6-96231b3b80d8
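
    As an illustration of the rule the commit above implements (a sketch only;
    the enum and function names are hypothetical, not LLVM's actual API): since
    `undef` may take any concrete value, jump threading is free to pick the
    value that makes the branch condition constant.

    ```cpp
    #include <cassert>

    // Hypothetical three-valued lattice for this illustration.
    enum Val { False = 0, True = 1, Undef = 2 };

    // Jump threading may choose ANY concrete value for undef, so it picks
    // the one that makes the condition constant:
    //   x | undef -> true   (choose undef = true)
    //   x & undef -> false  (choose undef = false)
    // Folding x | undef to x would be wrong: x itself may be unknown.
    Val foldOrWithUndef(Val x) { (void)x; return True; }
    Val foldAndWithUndef(Val x) { (void)x; return False; }

    int main() {
      assert(foldOrWithUndef(False) == True);   // even a false x is overridden
      assert(foldAndWithUndef(True) == False);  // even a true x is overridden
      return 0;
    }
    ```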
    
    commit 042109e6237b5fc008570e8adbb6ba445fb6120e
    Author: Eric Christopher <echristo at apple.com>
    Date:   Thu Feb 11 01:48:54 2010 +0000
    
        Add ConstantExpr handling to Intrinsic::objectsize lowering.
    
        Update testcase accordingly now that we can optimize another
        section.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95846 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 65944ae3e1b2fdd6b8966b5af084f9607b34999d
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Feb 11 01:31:01 2010 +0000
    
        test case for r95842.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95844 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 459468d4434427a26cbc0fe5f79e4c31fbaf2795
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Thu Feb 11 01:15:27 2010 +0000
    
        Fix to get it to compile.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95840 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d4880f4c56524735b593e4a852b584e101c80a54
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Thu Feb 11 01:13:02 2010 +0000
    
        Don't print out a default newline when emitting the section offset. There are
        almost always comments afterwards that need printing.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95839 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b3caf596efddb31aa4998bf8f9bbea09071198c2
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Thu Feb 11 01:07:39 2010 +0000
    
        Make it possible to create multiple JIT instances at the same time, by removing
        the global TheJIT and TheJITResolver variables.  Lazy compilation is supported
        by a global map from a stub address to the JITResolver that knows how to
        compile it.
    
        Patch by Olivier Meurant!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95837 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f5d1bc4945f66b39363f5adb70d652280d71f796
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Thu Feb 11 00:34:33 2010 +0000
    
        Reuse operand location when updating PHI instructions.
    
        Calling RemoveOperand is very expensive on huge PHI instructions. This makes
        early tail duplication run twice as fast on the Firefox JavaScript
        interpreter.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95832 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 983620a4957887088e3a54edf80d4a52789d7de3
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Thu Feb 11 00:34:18 2010 +0000
    
        Remove duplicate successors from indirectbr instructions before building the machine CFG.
    
        This makes early tail duplication run 60 times faster when compiling the Firefox
        JavaScript interpreter, see PR6186.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95831 91177308-0d34-0410-b5e6-96231b3b80d8
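
    The dedup step described above can be sketched as follows (illustration
    only; strings stand in for basic blocks, and this is not the actual
    SelectionDAG builder code): keep only the first occurrence of each
    successor so the machine CFG the tail duplicator walks stays small.

    ```cpp
    #include <cassert>
    #include <string>
    #include <unordered_set>
    #include <vector>

    // An indirectbr may list the same successor block many times; keeping
    // only the first occurrence shrinks the machine CFG.
    std::vector<std::string>
    uniqueSuccessors(const std::vector<std::string> &succs) {
      std::unordered_set<std::string> seen;
      std::vector<std::string> out;
      for (const auto &s : succs)
        if (seen.insert(s).second) // true only for the first occurrence
          out.push_back(s);
      return out;
    }

    int main() {
      std::vector<std::string> succs{"bb1", "bb2", "bb1", "bb3", "bb2"};
      auto u = uniqueSuccessors(succs);
      assert(u.size() == 3);
      assert(u[0] == "bb1" && u[1] == "bb2" && u[2] == "bb3");
      return 0;
    }
    ```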
    
    commit 01dd2eed529c40aaee21eb3ce179e12cad1f36d9
    Author: Devang Patel <dpatel at apple.com>
    Date:   Thu Feb 11 00:20:49 2010 +0000
    
        Ignore dbg info intrinsics.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95828 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bddbb3549a3e706e1e48f4069b0820872b4bfe49
    Author: Kevin Enderby <enderby at apple.com>
    Date:   Thu Feb 11 00:18:12 2010 +0000
    
        Remove the few # TAILCALL comments that snuck in, as they may fail on linux.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95827 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f9d6893474485fb9d42fb67f566494bf8119184b
    Author: Kevin Enderby <enderby at apple.com>
    Date:   Thu Feb 11 00:13:43 2010 +0000
    
        Update the X86 assembler matcher test case now that a few more things match
        with some of the recent changes that have gone into llvm-mc.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95826 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3125ea6e98852dab38710e6c9f4eb1fe31fec216
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Feb 10 23:58:53 2010 +0000
    
        Add support to llvm-extract for extracting multiple functions and/or
        multiple global variables at a time.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95825 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4c0275b0f91c1ea26942e6178dd62b0247b4252d
    Author: Mon P Wang <wangmp at apple.com>
    Date:   Wed Feb 10 23:37:45 2010 +0000
    
        The previous fix of widening divides that trap was too fragile as it depends on custom
        lowering and requires that certain types exist in ValueTypes.h.  Modified widening to
        check if an op can trap and if so, the widening algorithm will apply only the op on
        the defined elements.  It is safer to do this in widening because the optimizer can't
        guarantee removing unused ops in some cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95823 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a4f4b1506d2a7bd676adcd886d98c12012bdd952
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Feb 10 23:04:09 2010 +0000
    
        Ignore debug info one more place during coalescing.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95819 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0df83f4b11cb2e8b6f793a1425b77f9b6cd5e19b
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Feb 10 23:03:20 2010 +0000
    
        Allow isDebug inquiry on any MO.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95818 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1ca197830a1a1906a5657c9e9c52977510106e39
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Feb 10 22:58:57 2010 +0000
    
        Delete dead PHI machine instructions.  These can be created due to type
        legalization even when the IR-level optimizer has removed dead phis, such
        as when the high half of an i64 value is unused on a 32-bit target.
        I had to adjust a few test cases that had dead phis.
        This is a partial fix for Radar 7627077.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95816 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2b0ea846f73eb83706dd80d1fbde28fe3eeed217
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Feb 10 21:47:48 2010 +0000
    
        Skip debug info in a couple of places.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95814 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e3393b63d1a2483a7b7a2dd23e20cb16ea43c210
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Wed Feb 10 21:41:57 2010 +0000
    
        Use an index instead of pointers into the vector. If the vector resizes, then
        the pointer values could be invalid.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95813 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4dae4593c8310d09e587a928c15d3e7b32172037
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Feb 10 21:41:41 2010 +0000
    
        When I rewrote this loop per Chris' preference I
        changed its behavior.  Oops.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95811 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e824fb17b264b0696b236304671ddb4ccb50c895
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 10 21:37:31 2010 +0000
    
        add a virtual dtor to MCTargetExpr, hopefully silencing some warnings.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95810 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1b9161cf664d44a0c0789a2ea7e439b81f02d83f
    Author: Eli Friedman <eli.friedman at gmail.com>
    Date:   Wed Feb 10 21:26:04 2010 +0000
    
        A few missed optimizations; the last one could have a significant impact on
        code with lots of bitfields.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95809 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1196f9ba64661542cc87fcd99b96655874c057d4
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 10 21:22:51 2010 +0000
    
        work around a gcc bug with -Wuninitialized.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95808 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4343998c96075c771bc1b8c486cd4d141f4ddf9f
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Feb 10 21:19:56 2010 +0000
    
        Strip new llvm.dbg.value intrinsic.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95807 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9c7b65e886227bf3cbff010868d91c22daec111b
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Feb 10 21:19:28 2010 +0000
    
        MC/X86 AsmMatcher: Fix a use after free spotted by d0k, and de-XFAIL
        x86_32-encoding.s on the expectation of it passing.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95806 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 06e24dc5f9ea9603d282f77e327fff27748c8e95
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Feb 10 21:01:04 2010 +0000
    
        XFAIL this on linux until I figure out what is happening.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95804 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 63bb9e8ff151f1d947682100594e794b646a6646
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Feb 10 21:00:55 2010 +0000
    
        lit: Ignore dot files when scanning for tests (e.g., editor temporary
        files, etc.)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95803 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 20631dbca24c6895ac647923fa3742fe94e7b3a7
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Feb 10 21:00:47 2010 +0000
    
        MC/AsmMatcher: Tweak conversion function name.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95802 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 013ca607f6404a26921689e0e2936f9eeff85891
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Feb 10 20:42:57 2010 +0000
    
        Minor whitespace cleanups.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95801 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3eeaacd71c0b4925eef44b315fa41d16588f2fad
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Feb 10 20:42:37 2010 +0000
    
        Use an AssemblyAnnotatorWriter to clean up IVUsers' debug output.
        The "uses=" comments are just clutter in this context.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95799 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d73878ab5df6dcade0aab73f789b9173e0a21a3e
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Feb 10 20:41:46 2010 +0000
    
        Add a hook to AssemblyAnnotationWriter to allow custom info comments
        to be printed, in place of the familiar "uses=" comments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95798 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 50d20e115bc9907aa170fd14b720a17c002e3db6
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Feb 10 20:23:33 2010 +0000
    
        Use doxygen comment syntax.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95797 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1cb9e26a4f8e5f3037fe24dff0780cfb0c8acae0
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Feb 10 20:04:19 2010 +0000
    
        Fix several comments which had previously been "the the" where a
        different word was intended.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95795 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6cd9da8b837fbb130cfb68bdfaf307f069838a6a
    Author: Kevin Enderby <enderby at apple.com>
    Date:   Wed Feb 10 19:13:56 2010 +0000
    
        Replace this file containing 4 tests of x86 32-bit encodings with a file
        containing the subset of the full auto generated test case that currently
        encodes correctly.  Again it is useful as we bring up the new encoder
        to make sure currently working stuff stays working.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95791 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3e91762c19cff323979775171efd78ce52cbd6f2
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Wed Feb 10 18:02:25 2010 +0000
    
        Added NOP, DBG, SVC to the instruction table for disassembly purpose.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95784 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit af23afb740065a6c0c12f1d2fbc88eb3f18036bd
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Feb 10 16:03:48 2010 +0000
    
        Fix "the the" and similar typos.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95781 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 349b2a1b5c6c41646b8f04bf96ed0383b5607602
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Feb 10 15:54:22 2010 +0000
    
        Minor code simplification.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95780 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a960e5e14330ff0f2791bd69fb3576813a5cea7a
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Wed Feb 10 13:34:02 2010 +0000
    
        Silence GCC warnings.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95779 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 67f1b0e981f324a66a7e6713a20cd6758ce57633
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Feb 10 08:15:48 2010 +0000
    
        MC/AsmMatcher: Add support for creating tied operands when constructing MCInsts.
         - Pretty messy, but we need to rework how we handle tied operands in MCInst
           anyway.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95774 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f35eb9fa274c8e1ca4cd0c349cfb780aefda3758
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 10 06:52:12 2010 +0000
    
        emit some simple (and probably incorrect) fixups for symbolic
        displacement values.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95773 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 64f2e32ed184eb327d7dca782a2f34e0e4ed6e0f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 10 06:41:02 2010 +0000
    
        keep track of what the current byte being emitted is
        throughout the X86 encoder.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95771 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 52a804e27005885de0e26cc29ecdbfda4389bc17
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 10 06:30:00 2010 +0000
    
        simplify displacement handling, emit displacements by-operand
        even for the immediate case.  No functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95770 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fb024ff9f39712daa425d097cd3d497e2973087f
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Feb 10 06:13:07 2010 +0000
    
        Canonicalize sizeof and alignof on pointer types to a canonical
        pointer type.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95769 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d637aba529b5cc6a8dedbddfc187b3cd6ef4ccde
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Feb 10 05:54:04 2010 +0000
    
        Implement operators |=, &=, and ^= for SmallBitVector, and remove the
        restriction in BitVector for |= and ^= that the operand must be the
        same length.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95768 91177308-0d34-0410-b5e6-96231b3b80d8
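
    The relaxed length rule can be sketched like this (illustration only:
    `std::vector<bool>` stands in for the LLVM bit-vector classes, and growing
    the left operand is one plausible choice, not necessarily what the real
    implementation does):

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <vector>

    // |= that tolerates operands of different lengths: grow the left-hand
    // side first so no set bit of the right-hand side is lost.
    std::vector<bool> &orAssign(std::vector<bool> &lhs,
                                const std::vector<bool> &rhs) {
      if (rhs.size() > lhs.size())
        lhs.resize(rhs.size(), false);
      for (std::size_t i = 0; i < rhs.size(); ++i)
        lhs[i] = lhs[i] | rhs[i];
      return lhs;
    }

    int main() {
      std::vector<bool> a{true, false};           // length 2
      std::vector<bool> b{false, true, true};     // length 3
      orAssign(a, b);                             // no same-length restriction
      assert(a.size() == 3);
      assert(a[0] && a[1] && a[2]);
      return 0;
    }
    ```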
    
    commit 2c7ab5dec64227e63d7417a99abe57324ae84436
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Feb 10 04:47:08 2010 +0000
    
        MC: Switch MCFixup to just hold an MCExpr pointer instead of index into the
        MCInst it came from.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95767 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5336c8961a2c3379da09d95ee4d7903fd04f7ad8
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Feb 10 04:46:51 2010 +0000
    
        Fix a signed comparison warning.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95766 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dbeaaa6b9892186196acaa7576334eec63a738e5
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Feb 10 04:10:10 2010 +0000
    
        Remove stray DOS newline.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95765 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 252efd4f274ba896f82b785774973941198a79c9
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Feb 10 04:09:52 2010 +0000
    
        Add a ReleaseNotes FIXME.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95764 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 99fff3e301012321a0a0a7db660bfc6ca6852a38
    Author: Garrison Venn <gvenn.cfe.dev at gmail.com>
    Date:   Wed Feb 10 03:38:29 2010 +0000
    
        Prevented build on WINDOWS using default make system. Stopped WINDOWS build
        at the llvm/examples level using if check on LLVM_ON_UNIX.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95763 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 07663f302606e6b2112b4fde8db8c1c76bb97c11
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Wed Feb 10 03:23:23 2010 +0000
    
        Updated the enhanced disassembly library's TableGen
        backend to not use exceptions at all except in cases
        of actual error.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95762 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f327869a5de5fb01a4c42cc789ab97663fc31762
    Author: Garrison Venn <gvenn.cfe.dev at gmail.com>
    Date:   Wed Feb 10 02:50:08 2010 +0000
    
        Prevented ExceptionDemo example being built on WINDOWS via if( NOT WIN32 )
        check in examples cmake list file. This has NOT been tested.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95761 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 096b097f4a245dd6e043ffbff4d574800f4768c4
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Wed Feb 10 02:47:08 2010 +0000
    
        Updated the TableGen emitter for the Enhanced
        Disassembler to take advantage of the refactored
        AsmWriterInst.h.  Note removed parser code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95760 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 605a402cc2a7ef412ad4144a2de45d68f7b65dcb
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Wed Feb 10 02:27:43 2010 +0000
    
        Changed AsmWriterOperand to also include the index of the
        operand into the CodeGenInstruction's list of operands,
        which is useful for EDEmitter.  (Still working on PR6219)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95759 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e8f028b8bce6e92e12809beac0e0379b601980a4
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Feb 10 02:17:34 2010 +0000
    
        Now that ShrinkDemandedOps() is separated out from DAG combine, it sometimes leaves some obvious no-ops which DAG combine used to clean up afterwards, e.g. (trunc (ext n)) -> n.  Look for them and squash them.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95757 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a7a939aeea7bbbadddde5c0491621dbe1059c00d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 10 01:46:47 2010 +0000
    
        "fixup" a comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95754 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 33fe554b0bb1dd7fc2ee9c9e5d7afae9233d652c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 10 01:45:28 2010 +0000
    
        Introduce a new CodeGenInstruction::ConstraintInfo class
        for representing constraint info semantically instead of
        as a c expression that will be blatted out to the .inc
        file.  Fix X86RecognizableInstr to use this instead of
        parsing C code :).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95753 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b6129a289288b6bd0d237dece7ccc494a01bb59e
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Feb 10 01:41:14 2010 +0000
    
        llvm-mc: Remove --show-fixups and always show as part of --show-encoding.
    
        Also, fix a silly memory leak.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95752 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bf8fe6a7fc7a6722143d6025007aaab01a3b1526
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Feb 10 01:31:26 2010 +0000
    
        Rewrite loop to suit Chris' preference.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95749 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c3f58b6ce58d2287a291f0c82993b461c0cc5bc6
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 10 01:23:18 2010 +0000
    
        fix a layering violation: VirtRegRewriter.cpp shouldn't use AsmPrinter.h.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95748 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e8fc7b209da57482995dcb78a33449bcb3ceacbf
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Feb 10 01:22:57 2010 +0000
    
        Remove duplicated #include.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95747 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 93726d8a787a3b1e7d709ea6259eca7eb8bdd6c8
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Feb 10 01:21:02 2010 +0000
    
        Emit an error for illegal inline asm constraint (which uses illegal type) rather than asserting.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95746 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5b2322be1ee42dbf4dd1df42e3cb097b231f7603
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 10 01:17:36 2010 +0000
    
        fix missing #includes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95745 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 28afd8223a04f38c87746e0fe8c7e1e251db0a97
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 10 01:05:28 2010 +0000
    
        daniel *really* likes fixups!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95742 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e90535cbb30c071096987e9f5ae2114f3132ea96
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 10 01:04:16 2010 +0000
    
        Stop MachineInstr.h from #including AsmPrinter.h
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95741 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ce5a93e4cdc1ca70c5dc436453f227b3d0fcb715
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Wed Feb 10 00:59:47 2010 +0000
    
        Improve comments even more.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95740 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d832aa8fb8044b80a86273493c70770537bcce14
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Feb 10 00:55:42 2010 +0000
    
        Skip DBG_VALUE many places in live intervals and
        register coalescing.  This fixes many crashes and
        places where debug info affects codegen (when
        dbg.value is lowered to machine instructions, which
        it isn't yet in TOT).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95739 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f87769d484291b79abcbce9c1d93e189a9429529
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 10 00:47:53 2010 +0000
    
        Move verbose asm instruction comments to using MCStreamer.
        The major win of this is that the code is simpler and they
        print on the same line as the instruction again:
    
                movl    %eax, 96(%esp)          ## 4-byte Spill
                movl    96(%esp), %eax          ## 4-byte Reload
                cmpl    92(%esp), %eax          ## 4-byte Folded Reload
                jl      LBB7_86
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95738 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit db7b35ff8026dfdbc0d33da63a3a7c19b6e656ea
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Wed Feb 10 00:45:28 2010 +0000
    
        Improve comments a bit more.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95737 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9c711aa69b6220492b543f42e477dc1b92704450
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Feb 10 00:44:23 2010 +0000
    
        more comment updates
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95736 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit de4b068361b29005fab942ac14ac8f12be1b743e
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Feb 10 00:41:49 2010 +0000
    
        Add isDebug argument to ChangeToRegister; this prevents
        the field from being used uninitialized later in some cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95735 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e8e397b58489fd96a45e69c7d7710eab6d5cab18
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 10 00:36:00 2010 +0000
    
        print all the newlines at the end of instructions with
        OutStreamer.AddBlankLine instead of textually.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95734 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bf50ddc93cf854eb803d580e07b4b2276a4e6430
    Author: Kenneth Uildriks <kennethuil at gmail.com>
    Date:   Wed Feb 10 00:14:03 2010 +0000
    
        IntegerValType holds a uint32_t, so its constructor should take a uint32_t.  This allows it to be properly initialized with bit widths > 65535.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95731 91177308-0d34-0410-b5e6-96231b3b80d8
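
    The truncation this commit fixes can be demonstrated in a few lines
    (hypothetical structs for illustration; not the verifier's actual
    IntegerValType): a 16-bit field silently wraps any width above 65535.

    ```cpp
    #include <cassert>
    #include <cstdint>

    struct Narrow  { uint16_t BitWidth; };  // too small for large widths
    struct Widened { uint32_t BitWidth; };  // what the commit switches to

    int main() {
      uint32_t width = 70000;  // a legal LLVM integer bit width, > 65535
      Narrow n{static_cast<uint16_t>(width)};
      Widened w{width};
      assert(n.BitWidth != width);  // wrapped: 70000 - 65536 = 4464
      assert(w.BitWidth == width);  // uint32_t preserves the full value
      return 0;
    }
    ```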
    
    commit 94f6097942b3dd9b13c2c71a74b7f7c034cc70e0
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Feb 10 00:11:11 2010 +0000
    
        Fix comments to reflect renaming elsewhere.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95730 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3d9d97cddb404caf738a7b494b931b17e8f585a8
    Author: Kevin Enderby <enderby at apple.com>
    Date:   Wed Feb 10 00:10:31 2010 +0000
    
        Fix the encoding of the movntdqa X86 instruction.  It was missing the 0x66
        prefix which is part of the opcode encoding.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95729 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 59ee3d246bb7292180f55bdc97b9d0ace583f7db
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 10 00:10:18 2010 +0000
    
        Add ability for MCInstPrinters to add comments for instructions.
        Enhance the x86 backend to show the hex values of immediates in
        comments when they are large.  For example:
    
                movl    $1072693248, 4(%esp)    ## imm = 0x3FF00000
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95728 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a0431e6c49421e302de8d311c1a5581549fc1f18
    Author: David Greene <greened at obbligato.org>
    Date:   Tue Feb 9 23:52:19 2010 +0000
    
        TableGen fragment refactoring.
    
        Move some utility TableGen defs, classes, etc. into a common file so
        they may be used by multiple pattern files.  We will use this for
        the AVX specification to help with the transition from the current
        SSE specification.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95727 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7a6844b9567642a99a459eee65f8bce25dbe7449
    Author: Garrison Venn <gvenn.cfe.dev at gmail.com>
    Date:   Tue Feb 9 23:22:43 2010 +0000
    
        Adds a JIT based exception handling example to the examples directory.
        Both zero-cost domain-specific and C++ foreign exception handling are
        shown. The example's documentation fully explains how to run the example.
    
        Notes:
    
        1)   The code uses an extremely simple type info model.
        2)   Only a single landing pad is used per unwind edge
             (one call to llvm.eh.selector)
        3)   llvm.eh.selector support for filter arguments is not given.
        4)   llvm.eh.typeid.for is not used.
        5)   Forced unwind behavior is not supported.
        6)   Very little if any error handling is given.
        7)   __attribute__((__aligned__)) is used.
        8)   The code uses parts from the llvm compiler-rt project and
             the llvm Kaleidoscope example.
        9)   The code has not been ported or tested on WINDOWS.
        10)  The code was not tested with a cmake build.
        11)  The code was tested for a debug build on 32bit X86 CentOS LINUX,
             and both a debug and release build on OS X 10.6.2 (64bit).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95723 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 490852e92928683702e224b0a92a4d5a333c3bf7
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Tue Feb 9 23:06:35 2010 +0000
    
        Fixed some indentation in the AsmWriterInst
        implementation.  Also changed the constructor
        so that it does not require a Record, making it
        usable by the EDEmitter.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95715 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 23d193a06140d17e33ddb4428a71f67d8d3992a6
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Tue Feb 9 23:05:23 2010 +0000
    
        Add VBIF/VBIT for disassembly only.
        A8.6.279
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95713 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c321472b1f49b10f261f74508c40cb5ce72431da
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Tue Feb 9 23:03:44 2010 +0000
    
        Make --disable-libffi work on systems with libffi installed.  Also
        make no-ffi the default even on systems with libffi.  This fixes
        http://llvm.org/PR5018.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95712 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c1bdcf1fa7cdd98c7b4da719dd74ebfec1f4e027
    Author: David Greene <greened at obbligato.org>
    Date:   Tue Feb 9 23:03:05 2010 +0000
    
        Only dump output in debug mode.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95711 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ab57543e713bc754b2640e3747c2ff65457b63c2
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Feb 9 23:00:14 2010 +0000
    
        llvm-mc: Add --show-fixups option, for displaying the instruction fixup information in the asm comments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95710 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e466b68c58bdc2ce05dbabdc02783c53b955500b
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Feb 9 23:00:03 2010 +0000
    
        MC/X86: Add a dummy implementation of MCFixup generation for hacky X86 MCCodeEmitter.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95709 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c5a052a84947f80567a72d99944f943b79d9f76b
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Feb 9 22:59:55 2010 +0000
    
        MC: First cut at MCFixup, for getting fixup/relocation information out of an MCCodeEmitter.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95708 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b8f02384dcbe2c57d4a0798687cdc06ebcdf0845
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Feb 9 22:49:16 2010 +0000
    
        Improve comments in the LSDA somewhat. They can be improved much more.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95707 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 63ac41c0db09647d6469560b8803ee8d0a2d8903
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Tue Feb 9 22:35:38 2010 +0000
    
        Added VMRS/VMSR for disassembly only.
        A8.6.335 & A8.6.336
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95703 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ecadd7d8ef99681205ed4527530cda4773a5bbbb
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Tue Feb 9 22:29:16 2010 +0000
    
        Added AsmWriterInst.cpp to the CMakeList so that
        it builds OK on Visual Studio.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95702 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 78db1ba08fe914adb89e6c66e7065a7c0eb202ea
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Tue Feb 9 22:15:27 2010 +0000
    
        Disable unittests/ADT/BitVectorTest on PPC Darwin.
        It fails with a release build only, for reasons
        as yet unknown.  (If there's a better way to Xfail
        things here let me know, doesn't seem to be any
        prior art in unittests.)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95700 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f373fc3257d5b0aa0fcfc59298f8033d2643078b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 9 21:57:34 2010 +0000
    
        port encoder enhancements over to the new encoder.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95699 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b3b3acf9317ca5fac7207f3493583422f7a9400f
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Tue Feb 9 21:50:41 2010 +0000
    
        Per PR 6219, factored AsmWriterInst and AsmWriterOperand
        out of the AsmWriterEmitter.  This patch does the physical
        code movement, but leaves the implementation unchanged. I'll
        make any changes necessary to generalize the code in a
        separate patch.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95697 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e8b1aac62bd8a74b004e4abed07d2f693a4cd365
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 9 21:47:19 2010 +0000
    
        fix X86 encoder to output [disp] only addresses with no SIB byte
        in X86-32 mode.  This is still required in x86-64 mode to avoid
        forming [disp+rip] encoding.  Rewrite the SIB byte decision logic
        to be actually understandable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95693 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9d47ee14b397d7315c3fa1767cb01bd6823daa87
    Author: Eric Christopher <echristo at apple.com>
    Date:   Tue Feb 9 21:24:27 2010 +0000
    
        Move Intrinsic::objectsize lowering back to InstCombineCalls and
        enable constant 0 offset lowering.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95691 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 616e35969b69d0fcb063634b465a41c1bf8a817a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 9 21:21:26 2010 +0000
    
        revert r95689: getX86RegNum(BaseReg) != N86::ESP is
        a confusing idiom to check for ESP or RSP.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95690 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 20f133563c8c6dd09f4c754e6236bc79801e0adf
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 9 21:00:12 2010 +0000
    
        simplify.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95689 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7a50aac9641027f9608ce3ab34664266bab982e7
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Tue Feb 9 19:54:29 2010 +0000
    
        Re-disable for Darwin; I was mistaken to think this was fixed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95688 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 261a07e6802837ca121afbd35aa67d66b8cddf79
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 9 19:54:29 2010 +0000
    
        move target-independent opcodes out of TargetInstrInfo
        into TargetOpcodes.h.  #include the new TargetOpcodes.h
        into MachineInstr.  Add new inline accessors (like isPHI())
        to MachineInstr, and start using them throughout the
        codebase.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95687 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 10b4c2ecc28cb21e6038f53185a0409a12ba02fc
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Feb 9 19:51:37 2010 +0000
    
        Radar 7417921
    
        The tMOVCCi pattern is only valid for low registers, as the Thumb1 mov immediate to
        register instruction only works with low registers. Allowing high registers
        for the instruction resulted in the assembler choosing the wide (32-bit)
        encoding for the mov, but LLVM thought the instruction was only 16 bits wide,
        so offset calculations for constant pools became incorrect, leading to
        out-of-range constant pool entries.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95686 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6ab0b29165870293ea8a9c896fcda6bb15f685ab
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Tue Feb 9 19:07:19 2010 +0000
    
        Add support for TypeBuilder<const/volatile void*, false>.
        Thanks to Jochen Wilhelmy for the suggestion!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95677 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 01f65f0fa690ad8c066d2b79b9c0ffe51f7e300a
    Author: Eric Christopher <echristo at apple.com>
    Date:   Tue Feb 9 17:29:18 2010 +0000
    
        Pull these back out, they're a little too aggressive and time
        consuming for a simple optimization.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95671 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3dd1b1df4e46cfa0453b91c17d5d5513b0573262
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Tue Feb 9 17:24:21 2010 +0000
    
        Oops.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95670 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 887f91b80eeb4259ebcf31ed1404fdc7eaec8d4c
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Tue Feb 9 17:21:56 2010 +0000
    
        Added vcvtb/vcvtt (between half-precision and single-precision, VFP).
        For disassembly only.
    
        A8.6.300
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95669 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 541e03a3b0eee5b3d76cc1ae3cd8544e40476e7e
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Tue Feb 9 17:20:11 2010 +0000
    
        Remember to update live-in lists when coalescing physregs.
    
        Patch by M Wahab!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95668 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aa2cc940fb5cc78df37805e32e5f2c1650107b3a
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Tue Feb 9 17:20:03 2010 +0000
    
        clang test suite
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95667 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f5839e96c8d3207d3640a2c2dc07a2ab5ce18630
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Feb 9 17:00:40 2010 +0000
    
        Mention IndVarSimplify in the comment by getSmallConstantTripCount, as
        is done for getTripCount.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95666 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e6fc8fab2a22b9ef1bc4a9d9c5dd8077a87a6554
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Feb 9 16:59:14 2010 +0000
    
        Mention vAny and iPTRAny in a comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95665 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d2609573cf0c9beb247b7f0e8c9d280bce8656ac
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 9 06:41:03 2010 +0000
    
        move tests that depend on the x86 backend out of codegen/generic,
        and remove a few old and unreduced ones.  Fixes PR5624.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95656 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 50c7723419a8a2e74c9f511917e120a8a9f4f45f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 9 06:36:30 2010 +0000
    
        make target independent.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95655 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0392fa4a69ca47c515211ce34325408b3dedb97e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 9 06:35:50 2010 +0000
    
        merge a target-specific add test into x86 directory.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95654 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 636fef000f14238f8efdf5308e0630a849f35a7f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 9 06:33:27 2010 +0000
    
        merge another test in, drop the trivially constant folded cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95653 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e3c0bc8b5d2cdd6217ee00202582d426996cd29e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 9 06:24:00 2010 +0000
    
        consolidate and filecheckize two tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95652 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit db71d597a8f3632d5e8c7417fa5247dc589543cc
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 9 06:19:20 2010 +0000
    
        merge two tests, make target independent.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95651 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 46bf694a136bd056d2546abe8a1d6b74f0a56efa
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 9 05:55:14 2010 +0000
    
        move PR3462 to here.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95650 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a3f972db36312b043592d64b359686f77a2d1e0c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 9 05:45:29 2010 +0000
    
        add a note from PR6194
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95649 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a70edceec20bee694492901eb019aa97b8f27e75
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Tue Feb 9 02:01:46 2010 +0000
    
        Skip DEBUG_VALUE in some places where it was affecting codegen.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95647 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ad7410a2b169cd871652cbeeda802c3955960e69
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Feb 9 01:58:33 2010 +0000
    
        Add declaration attribute to a variable DIE, if there is a separate DIE for the definition.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95646 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2f916df02dd583e77cc2414ce0c5d6a26a42ecf6
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Tue Feb 9 01:50:54 2010 +0000
    
        Updated the enhanced disassembly library to produce
        whitespace tokens in the right places.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95645 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8257d65e9602988beb2bc4f85a07ba4ef52722ed
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 9 01:39:46 2010 +0000
    
        fix llvm_build_struct_gep for PR6167, patch by
        Peter Hawkins!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95644 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 420101c495eff5b9473802dcac80ab787ca2fc9a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 9 01:14:06 2010 +0000
    
        simplify this code, duh.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95643 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 23480f2817a488aad23eea3858b328fbccbc85b5
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 9 01:12:41 2010 +0000
    
        fix PR6193, only considering sign extensions *from i1* for this
        xform.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95642 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4f11751b8b463eed987b81ec2352fff4a2ba654e
    Author: Eric Christopher <echristo at apple.com>
    Date:   Tue Feb 9 01:11:03 2010 +0000
    
        Add file in here too.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95641 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 51cc5e94852ad9d40e2715b6e22f285b68b6b339
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Tue Feb 9 01:00:18 2010 +0000
    
        Fixed a problem where the enhanced disassembly
        library was reporting inaccurate token IDs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95639 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c092b994e77608bec7eae1478d7f39d0479dc107
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 9 00:54:51 2010 +0000
    
        make -show-inst be formatted a bit nicer.  Before:
    
        	movl	$3735928559, a          ## inst: <MCInst 1273 <MCOperand Reg:0> <MCOperand Imm:1> <MCOperand Reg:0> <MCOperand Expr:(a)> <MCOperand Reg:0> <MCOperand Expr:(3735928559)>>
    
        after:
    
        	movl	$3735928559, a          ## <MCInst #1273
                                                ##   <MCOperand Reg:0>
                                                ##   <MCOperand Imm:1>
                                                ##   <MCOperand Reg:0>
                                                ##   <MCOperand Expr:(a)>
                                                ##   <MCOperand Reg:0>
                                                ##   <MCOperand Expr:(3735928559)>>
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95637 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 73aae2a863dc7bdc27506b9d9f8c037cb35bc810
    Author: Lang Hames <lhames at gmail.com>
    Date:   Tue Feb 9 00:50:27 2010 +0000
    
        Fixed a bug in the PBQP allocator's findCoalesces method.
    
        Previously spill registers, whose def indexes are not defined, would sometimes be improperly marked as coalescable with conflicting registers. The new findCoalesces routine conservatively assumes that any register with at least one undefined def is not coalescable with any register it interferes with.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95636 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f8fdb73df3da757998d2e067b1f62a2b4df1f53d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 9 00:49:22 2010 +0000
    
        Implement x86 asm parsing support for %st and %st(4)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95634 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7b2795d4e1854a9d649280895bea2ead196e45c3
    Author: Lang Hames <lhames at gmail.com>
    Date:   Tue Feb 9 00:45:48 2010 +0000
    
        Added sensible copy construction & assignment to PBQP graphs and fixed a memory access bug in the heuristic solver.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95633 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4895f0c2e0218dfb21c8e52849e1191b8db23734
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Tue Feb 9 00:42:08 2010 +0000
    
        Debug operands should not be def or kill.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95632 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2a60d78777508f981da8ba050a16009751e6bcf0
    Author: Lang Hames <lhames at gmail.com>
    Date:   Tue Feb 9 00:41:23 2010 +0000
    
        Changed the definition of an "invalid" slot to include the empty & tombstone values, but not zero.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95631 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f4c9a75bd96cec540f28ec731274329dfcf2b2b7
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 9 00:40:07 2010 +0000
    
        stop using reserved identifiers.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95630 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e45e9747b90f8717f22e1c433e5208ad7aee3750
    Author: Eric Christopher <echristo at apple.com>
    Date:   Tue Feb 9 00:35:38 2010 +0000
    
        Add a new pass to do llvm.objsize lowering using SCEV.
        Initial skeleton and SCEVUnknown lowering implemented,
        the rest should come relatively quickly.  Move testcase
        to new directory.
    
        Move pass to right before SimplifyLibCalls - which is
        moved down a bit so we can take advantage of a few opts.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95628 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d332011a64483f20d6095defb03d03ce0dbd32f1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 9 00:34:28 2010 +0000
    
        pass stringref by value instead of by const&
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95627 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 38600491dcf049ee6444b4c9b3a5e805b3ed89c9
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Feb 9 00:29:29 2010 +0000
    
        Add explicit keywords.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95626 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 945d6e6ef5500953740ddf25d349b7d8553b3539
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 9 00:11:10 2010 +0000
    
        move PR6212 to this file.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95624 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 60992d698f56b41c8052610257dd0a84900e8b96
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 9 00:05:45 2010 +0000
    
        enhance bits_storage to work with enums by using a c-style
        cast instead of reinterpret_cast, fixing PR6243.  Apparently
        reinterpret_cast and I aren't getting along today.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95622 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 47a9f41cc78716c82d2bbe2ce4b0211d33a916cf
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Feb 9 00:02:37 2010 +0000
    
        Implement AsmPrinter support for several more operators which have
        direct MCExpr equivalents. Don't use MCExpr::Shr because it isn't
        consistent between targets.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95620 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2376dd8e47d549492464909444dff454c23a7b3d
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Feb 8 23:58:47 2010 +0000
    
        Document that MCExpr::Mod is actually remainder.
    
        Document that MCExpr::Div, Mod, and the comparison operators are all
        signed operators.
    
        Document that the comparison operators' results are target-dependent.
    
        Document that the behavior of shr is target-dependent.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95619 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8479c1b655a03a1c166221803312525333cc2f46
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 8 23:56:03 2010 +0000
    
        fix some problems handling large vectors reported in PR6230
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95616 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cba43384cca80445e3c3f75208aeba9cfc3c25e4
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 8 23:48:10 2010 +0000
    
        this is done, tested by CodeGen/ARM/iabs.ll
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95609 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 015b886f496c17e3283b12ee9898e3795fe17964
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 8 23:47:34 2010 +0000
    
        convert to filecheck.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95608 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b31a0feb5bf1db49cb3a9be767650a7b6a4a5f74
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Mon Feb 8 23:34:25 2010 +0000
    
        Added header file declarations and .exports entries
        for the new APIs offered by the enhanced disassembler
        for inspecting operands.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95606 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f4e2d98e4e90b4cd1042261bf561b83cd3971d5a
    Author: Devang Patel <dpatel at apple.com>
    Date:   Mon Feb 8 23:27:46 2010 +0000
    
        test case for r95604.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95605 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 98b68dabfc5f27450a8827437dbc66ce9736f987
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Mon Feb 8 23:22:00 2010 +0000
    
        tighten up eh.setjmp sequence a bit.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95603 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f444ae4711a92742b13d76d3174611058cfb2c40
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 8 23:10:08 2010 +0000
    
        now that @GOTOFF is no longer represented as a suffix on a
        MCSymbol, we can remove the 'suffix' argument of
        GetBlockAddressSymbol.  Do so.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95601 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e7566f7ea341feb29e7d06d560ea0468f9da7e76
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 8 23:03:41 2010 +0000
    
        unify the paths for external symbols and global variables:
         2 files changed, 48 insertions(+), 83 deletions(-)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95599 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 878b5bee0314e26770cf4c4940c250c68707d1c5
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 8 22:52:47 2010 +0000
    
        switch the rest of the "@ concatenation" logic in the X86
        backend to use X86MCTargetExpr, simplifying a bunch of code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95595 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d4b7310a90a8dc31334903ce141fb6728bef4a0d
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Mon Feb 8 22:50:23 2010 +0000
    
        Fixed the AT&T AsmLexer to report the proper strings
        for register tokens.  Before, if it encountered
        '%al,' it would report 'al,' as the token.  Now it
        correctly reports '%al'.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95594 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3c9bf8bf6d2ac83f79f0e3d5f3f7de7ae1579ef9
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 8 22:33:55 2010 +0000
    
        switch ELF @GOTOFF references to use X86MCTargetExpr.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95593 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 36657aab3e97cf1ca68815bf7eba7b5b4e080b30
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Feb 8 22:19:11 2010 +0000
    
        ConstantFoldConstantExpression can theoretically return the original
        expression; don't go into an infinite loop if it does.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95591 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0fdcbcae5f72970bc4c9732ecd9c72d0e4bed5e1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 8 22:09:08 2010 +0000
    
        add an x86 implementation of MCTargetExpr for
        representing @GOT and friends.  Use it for
        personality references as a first use.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95588 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0d2e850f5ff5a513c8ab0177b034361f2f489cba
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 8 22:07:36 2010 +0000
    
        don't make the dtor private or we can't construct the class.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95587 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f0273f0d4692800e282fa4a1babcf3511a22fe8a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 8 22:05:38 2010 +0000
    
        use a c-style cast instead of reinterpret-cast, as sometimes the
        cast needs to adjust for a vtable pointer when going from base to
        derived type (when the base doesn't have a vtable but the
        derived type does).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95585 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d1240766902e48f93afc1b46520bc146007bdb9f
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Mon Feb 8 22:02:41 2010 +0000
    
        Add VCVTR (between floating-point and integer, VFP) for disassembly purpose.
        The 'R' suffix means the to-integer operations use the rounding mode specified
        by the FPSCR, encoded as Inst{7} = 0.
    
        A8.6.295
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95584 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ea0b08efa466c274c270305165ea473406353d7a
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Feb 8 22:02:38 2010 +0000
    
        When CodeGen'ing unoptimized code, there may be unfolded constant expressions
        in global initializers. Instead of aborting, attempt to fold them on the
        spot. If folding succeeds, emit the folded expression instead.
    
        This fixes PR6255.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95583 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 642a130ec467059b96527271834db7ec40c5eb0c
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Feb 8 22:00:06 2010 +0000
    
        Add const qualifiers.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95582 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 739b7feb51bda6378f4a403e248cdb36ac4d28e0
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Mon Feb 8 21:53:27 2010 +0000
    
        Apply the 95471 fix to SelectionDAGBuilder as well;
        we can get in here if FastISel gives up in a block.
        (Actually the two copies of this need to be unified.  Later.)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95579 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 95d7e60ad7d30e54f42e40ac6d6362e8ed079821
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Feb 8 20:34:14 2010 +0000
    
        In guaranteed tailcall mode, don't decline the tailcall optimization
        for blocks ending in "unreachable".
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95565 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a64ddd5cca43fd3abfaa3191f01ff1ecc762ada4
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Feb 8 20:27:50 2010 +0000
    
        Rename the PerformTailCallOpt variable to GuaranteedTailCallOpt to reflect
        its current purpose.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95564 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9d81583775836536a9e7a7c2d0140ab236a67f98
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Mon Feb 8 19:41:48 2010 +0000
    
        Add VCMP (VFP floating-point compare without 'E' bit set) for disassembly purpose.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95560 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1020ff0702994f0ae376d644967f824716e11182
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 8 19:41:07 2010 +0000
    
        add scaffolding for target-specific MCExprs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95559 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8f85fa82f062103cbbf764911e4c29720bbdee01
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Mon Feb 8 19:36:51 2010 +0000
    
        Flesh out the list of predicates, for those who like this style.  I was
        looking for isPointer, and added the rest for uniformity.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95557 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 683f4fdca2b99ea8313f4ec9e95d91f1ceadbe2b
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Mon Feb 8 18:08:46 2010 +0000
    
        ImmutableIntervalMap: Fix for unqualified lookup into dependent base class, done
        by clang's -fixit! :)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95551 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f4bd310a7b73b1e80acb35c8c415db368ca854d0
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Mon Feb 8 17:26:09 2010 +0000
    
        Added VMOVRRS/VMOVSRR to ARMInstrVFP.td for disassembly purpose.
    
        A8.6.331 VMOV (between two ARM core registers and two single-precision registers)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95548 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 11449e728495f679b4c6b67dc3d656cd04796f9a
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Mon Feb 8 11:03:31 2010 +0000
    
        Fix some typos.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95542 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0e8ce476cbf724f9490deefccf6a170b7a1330a7
    Author: Edwin Török <edwintorok at gmail.com>
    Date:   Mon Feb 8 08:37:27 2010 +0000
    
        Fix x86 JIT stub on MSVC.
        Thanks to Kristaps Straupe for noticing the bug.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95537 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit abd3dd6ba14de9b44290bb2b4861406fdc076fa0
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Mon Feb 8 06:08:32 2010 +0000
    
        Fixed build error for redefinition.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95532 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 022752dff182da48cd5f6db70a90a331d7aaaa46
    Author: Sanjiv Gupta <sanjiv.gupta at microchip.com>
    Date:   Mon Feb 8 05:56:37 2010 +0000
    
        Add uppercase and lowercase part defines in driver.
        Use a temp dir with a unique name in the current dir itself.
        Use forward_value instead of unpack_values.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95530 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 640885d04fc3bdd4d001fea92ef486a325fb0d72
    Author: Nick Lewycky <nicholas at mxc.ca>
    Date:   Sun Feb 7 21:13:46 2010 +0000
    
        Make the destructor for TypeMapBase protected. Spotted by Duncan Sands with
        cppcheck!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95527 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b29f6887c9b9ca1815e71885a14de6f4119a61ee
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Sun Feb 7 21:09:22 2010 +0000
    
        Give DwarfPrinter a protected (but not virtual) destructor.  Cppcheck
        warns about this base class not having a virtual destructor, but since
        this class has no virtual methods and neither it nor the types derived
        from it have a destructor, a protected trivial destructor does the
        trick (and shuts cppcheck up) without the cost of introducing a vtable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95526 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 78506244d7ca04009b74cba63a767bfff889362b
    Author: Bruno Cardoso Lopes <bruno.cardoso at gmail.com>
    Date:   Sat Feb 6 21:00:02 2010 +0000
    
        Add support for VASTART on Mips.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95506 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ac629f51303d4578b4024ea9112bdea846b1fb6b
    Author: Bruno Cardoso Lopes <bruno.cardoso at gmail.com>
    Date:   Sat Feb 6 19:20:49 2010 +0000
    
        First step towards varargs support in Mips:
        - o32 cc must pass all arguments in A0...A3 and on the stack
        regardless of their type (but respect the alignment).
        - Store all variable arguments back to the caller's stack.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95500 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4377cdcc95e445d0cd120d7911bf73603ca1e1c1
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Feb 6 09:07:11 2010 +0000
    
        Run codegen dce pass for all targets at all optimization levels. Previously it was
        only run for x86 with fastisel. I've found it very effective in
        eliminating some obvious dead code resulting from formal parameter lowering,
        especially when tail call optimization eliminated the need for some of the loads
        from fixed frame objects. It also shrinks a number of the tests. A couple of
        tests no longer make sense and are now eliminated.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95493 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 63d86fe59317d155058309d486d9b12c9749af7e
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Feb 6 09:00:30 2010 +0000
    
        Remove a large test case that (soon will) no longer make sense.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95492 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ebcb016721cf15218e983e00896d7d0fd9c69aee
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Sat Feb 6 05:55:20 2010 +0000
    
        Fix an uninitialized value.  Radar 7609421.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95488 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ddc00c2e5437a927e17e5f9990e95c3adf202a01
    Author: Rafael Espindola <rafael.espindola at gmail.com>
    Date:   Sat Feb 6 03:32:21 2010 +0000
    
        Fix alignment on ppc linux. This fixes the build of crtend.o
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95477 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a338d790c56083d455bc753f473949f7c77ee3e9
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Feb 6 03:28:46 2010 +0000
    
        Do not emit callseq instructions around sibcalls. This eliminates some unnecessary stack adjustments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95475 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4fdd461c821799cfb547167b3f6dc339a0d4531c
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Sat Feb 6 02:28:32 2010 +0000
    
        Add a Debug bit to MachineOperand, for uses that
        are from debug info.  Add an iterator to MachineRegisterInfo
        to skip Debug operands when walking the use list.  No
        functional change yet.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95473 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8ce1aefd18dd02f4a7a780423d83f0934f2ff883
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Sat Feb 6 02:26:02 2010 +0000
    
        After Victor's latest commits I am seeing null
        addresses in dbg.declare; ignore this for the
        moment to prevent things from breaking.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95471 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8d5b21baf41576fa4016a3c6d838c1b77d8f000b
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Sat Feb 6 01:31:55 2010 +0000
    
        Linker should not remap null operands of metadata
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95468 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a48cde4a81346c7b01a3359b48d0fdb1434b2ec2
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Sat Feb 6 01:21:09 2010 +0000
    
        Function-local metadata whose operands had been optimized to no longer refer to function-local IR were not getting written by BitcodeWriter; solution is for these metadata to be enumerated just like global metadata.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95467 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3f30baafce7c09af90bb6e9e79c152f9eb4b7b0b
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Sat Feb 6 01:16:28 2010 +0000
    
        Reintroduce the InlineHint function attribute.
    
        This time it's for real! I am going to hook this up in the frontends as well.
    
        The inliner has some experimental heuristics for dealing with the inline hint.
        When given a -respect-inlinehint option, functions marked with the inline
        keyword are given a threshold just above the default for -O3.
    
        We need some experiments to determine if that is the right thing to do.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95466 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 268778b077fe46127e6a00235ce728bf64858387
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Sat Feb 6 01:16:25 2010 +0000
    
        Add a test for my change to disable reassociation for i1 types.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95465 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 13ccbeb95d19fd81634289f1bb7ecdd1bebd056f
    Author: Devang Patel <dpatel at apple.com>
    Date:   Sat Feb 6 01:02:37 2010 +0000
    
        Set DW_AT_artificial only if argument is marked as artificial.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95461 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ac664e555198be231007a735bd10b376fbf08c59
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Sat Feb 6 00:24:38 2010 +0000
    
        Handle AddrMode6 (for NEON load/stores) in Thumb2's rewriteT2FrameIndex.
        Radar 7614112.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95456 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9f5a066d16bee2846ae86747422154fae74382ce
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Fri Feb 5 23:21:31 2010 +0000
    
        Don't unroll loops containing function calls.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95454 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ed4441a21fde2e7c649dff89fbc78469111e2be9
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Fri Feb 5 23:21:18 2010 +0000
    
        Update CodeMetrics to count 'big' function calls explicitly.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95453 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a27bd000ed8d5a90462c151449de482167518d73
    Author: Devang Patel <dpatel at apple.com>
    Date:   Fri Feb 5 23:09:20 2010 +0000
    
        Do not generate specification DIE for nested functions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95452 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f77c07ec9ec5fce04c1c12449d86e8aa76d89958
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 5 22:56:11 2010 +0000
    
        fix incorrect encoding of SBB8mi that Kevin noticed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95448 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fb382fa1126c564624a73f3f3c342e7257c65965
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 5 22:49:06 2010 +0000
    
        fix a case where we'd mis-encode fisttp because of an incorrect (and
        redundant with a correct one) pattern that was added for the disassembler.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95446 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 58758822aa166beecd3633bb2dcc0d1cb40270f7
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 5 22:48:33 2010 +0000
    
        add note.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95445 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 76c3b4645cfb88bed06ac303e3bd5eca7fbd76d3
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 5 22:46:46 2010 +0000
    
        remove fixme
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95444 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fede66007333c9d37b55becc6478c660ffbb2f97
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 5 22:20:08 2010 +0000
    
        print encodings like this:
        	pslld	69, %mm3                ## encoding: [0x0f,0xf2,0x1c,0x25,0x45,0x00,0x00,0x00]
    
        instead of like this:
        	pslld	69, %mm3                ## encoding: [0x0f,0xf2,0x1c,0x25,0x45,0000,0000,0000]
    
        this only affects how 0 is printed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95441 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d29fd16ba6730e5d468b3c801036defc38d4fee8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 5 22:10:22 2010 +0000
    
        port X86InstrInfo::determineREX over to the new encoder.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95440 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e263df713726cad8194b57e152b239fefe116328
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Fri Feb 5 22:03:18 2010 +0000
    
        Teach SimplifyCFG about magic pointer constants.
    
        Weird code sometimes uses pointer constants other than null. This patch
        teaches SimplifyCFG to build switch instructions in those cases.
    
        Code like this:
    
        void f(const char *x) {
          if (!x)
            puts("null");
          else if ((uintptr_t)x == 1)
            puts("one");
          else if (x == (char*)2 || x == (char*)3)
            puts("two");
          else if ((intptr_t)x == 4)
            puts("four");
          else
            puts(x);
        }
    
        Now becomes a switch:
    
        define void @f(i8* %x) nounwind ssp {
        entry:
          %magicptr23 = ptrtoint i8* %x to i64            ; <i64> [#uses=1]
          switch i64 %magicptr23, label %if.else16 [
            i64 0, label %if.then
            i64 1, label %if.then2
            i64 2, label %if.then9
            i64 3, label %if.then9
            i64 4, label %if.then14
          ]
    
        Note that LLVM's own DenseMap uses magic pointers.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95439 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ea6cb061194032ea1cc02f3a0fa464862414c8c7
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 5 21:51:35 2010 +0000
    
        wire up 64-bit MCCodeEmitter.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95438 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ca4ccadd5a260f18e4a0028eb8e98eb66aed0119
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 5 21:34:18 2010 +0000
    
        really kill off the last MRMInitReg inst, remove logic from encoder.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95437 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f2187ef165bbd8862570d9a442efa069a2f21c32
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 5 21:30:49 2010 +0000
    
        lower the last of the MRMInitReg instructions in MCInstLower.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95435 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1d1881dc28eaf484040bae6693cd05a37268f3e6
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 5 21:21:06 2010 +0000
    
        teach X86MCInstLower to lower the MOV32r0 and MOV8r0
        pseudo instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95433 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8a7f3c694c409396d5afc89a011156d4ba0cf540
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 5 21:15:57 2010 +0000
    
        genericize helpers, use them for MOV16r0/MOV64r0
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95432 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5c9facc0f5829c9a757001a9be72fe525a7092c6
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 5 21:13:48 2010 +0000
    
        factor code better in X86MCInstLower::Lower, teach it to
        lower the SETB* instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95431 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4e91ee7ed44cec2961c224a7658517bfdb571339
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 5 19:53:02 2010 +0000
    
        fix logical-select to invoke filecheck right, and fix the instcombine
        xform it is checking to actually pass.  There is no need to match
        m_SelectCst<0, -1> since instcombine canonicalizes that into not(sext).
    
        Add matches for sext(not(x)) in addition to not(sext(x)).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95420 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bef7eae141f95e6f9bac2d89e32821aac60be254
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 5 19:37:31 2010 +0000
    
        implement the rest of the encoding types.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95414 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e49d7f0238b69efa30030feaa7c0902de2d34372
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 5 19:24:13 2010 +0000
    
        move functions for decoding X86II values into the X86II namespace.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95410 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e124253d75ebdf633ea46bfc88249f0d8816da80
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Feb 5 19:24:11 2010 +0000
    
        Implement releaseMemory in CodeGenPrepare and free the BackEdges
        container data. This prevents it from holding onto dangling
        pointers and potentially behaving unpredictably.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95409 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 00da15e44575b6dd929c98ffcea255fd1e3c417f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 5 19:20:30 2010 +0000
    
        constant propagate a method away.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95408 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d288fa5c437772beb0e8fd80321f3d30483bec69
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Feb 5 19:20:15 2010 +0000
    
        Use a SmallSetVector instead of a SetVector; this code showed up as a
        malloc caller in a profile.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95407 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4cb61fd7b7f6ab2a46be75570f738b78d4759c7b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 5 19:16:26 2010 +0000
    
        change getSizeOfImm and getBaseOpcodeFor to just take
        TSFlags directly instead of a TargetInstrDesc.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95405 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ee40acfedd0178837836456fd0be4056a490c481
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 5 19:04:37 2010 +0000
    
        add some more encodings.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95403 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0444fea9072a78d761b68d00d7d65596a13a2439
    Author: Eric Christopher <echristo at apple.com>
    Date:   Fri Feb 5 19:04:06 2010 +0000
    
        Remove this code for now. I have a better idea and will rewrite with
        that in mind.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95402 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aed0106212e26a122c13834c5a99aebcd94e7caa
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Fri Feb 5 18:09:19 2010 +0000
    
        Make lit's gtest support honor config.environment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95398 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f4af5e911cadff4f2f5a01d8bace2adc5cadbaeb
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Fri Feb 5 18:04:58 2010 +0000
    
        VMOVRRD and VMOVDRR both have Inst{7-6} = 0b00.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95397 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 56fbc4ede9610b426e5dbf62a5f7362082c2ce2e
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Fri Feb 5 16:19:36 2010 +0000
    
        Move --march, --mcpu, and --mattr from JIT/TargetSelect.cpp to lli.cpp.
        llc.cpp also defined these flags, meaning that when I linked all of LLVM's
        libraries into a single shared library, llc crashed on startup with duplicate
        flag definitions.  This patch passes them through the EngineBuilder into
        JIT::selectTarget().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95390 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 739d920a390fa32de22d3559c7f330e664b8cbed
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Feb 5 11:21:05 2010 +0000
    
        Make test more focused by eliminating extraneous bits.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95384 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3e9ffb3d5bf4f0694d95396cf9be6236eb7b7705
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Feb 5 07:32:18 2010 +0000
    
        MC: Change default comment column to 40 characters.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95378 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e1253ac260d7878f554e92d5faca350e05e3da1c
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Feb 5 06:37:00 2010 +0000
    
        Fix test.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95373 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f0216b2e36ecb4f9495a76c6742b5130fc28f554
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 5 06:16:07 2010 +0000
    
        implement the non-relocation forms of memory operands
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95368 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1af931d655ccd22047e6f10b4bc87275099048d3
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Feb 5 02:21:12 2010 +0000
    
        Handle tail call with byval arguments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95351 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9f1617ad56becb49ba3acd0f0277a852e747c7d4
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 5 02:18:40 2010 +0000
    
        start adding MRMDestMem, which requires memory form mod/rm encoding
        to start limping.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95350 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cb61e493e8166c3c5fa2e9be00cf7c2258a0454c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Fri Feb 5 01:53:19 2010 +0000
    
        Add a few more encodings, we can now encode all of:
    
        	pushl	%ebp
        	movl	%esp, %ebp
        	movl	$42, %eax
        	popl	%ebp
        	ret
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95344 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 14eaa6d49a9c27f5ca3568336f423dbd3a9a362b
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Feb 5 01:27:11 2010 +0000
    
        When the scheduler unfolds a load folding instruction, it moves some of the predecessors to the unfolded load. It decides what gets moved to the load by checking whether the new load uses the predecessor as an operand. The check neglects the case where the predecessor is a flagged scheduling unit.
        rdar://7604000
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95339 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7b8b3190e0b394f680aec4a3a2bbeb8d9683f8c6
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Feb 5 00:17:02 2010 +0000
    
        An empty global constant (one of size 0) may have a section immediately
        following it. However, the EmitGlobalConstant method wasn't emitting a body for
        the constant. The assembler doesn't like that. Before, we were generating this:
    
          .zerofill __DATA, __common, __cmd, 1, 3
    
        This fix puts us back to that semantic.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95336 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4754ce2bbdbc22901c895e808f6d3b59fd0e6ca5
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Thu Feb 4 23:32:37 2010 +0000
    
        Do not reassociate expressions with i1 type.  SimplifyCFG converts some
        short-circuited conditions to AND/OR expressions, and those expressions
        are often converted back to a short-circuited form in code gen.  The
        original source order may have been optimized to take advantage of the
        expected values, and if we reassociate them, we change the order and
        subvert that optimization.  Radar 7497329.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95333 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d0ee4d055a7df10fd00e7e2c79f49150794dc84b
    Author: Evan Phoenix <evan at fallingsnow.net>
    Date:   Thu Feb 4 19:56:59 2010 +0000
    
        Disable external stubs for X86-32 and X86-64
    
        Instruction selection for X86 now can choose an instruction
        sequence that will fit any address of any symbol, no matter
        the pointer width. X86-64 uses a mov+call-via-reg sequence
        for this.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95323 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6d58f1c83e473d7ef4bbe5eca61b44caf2ee9217
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Feb 4 19:07:06 2010 +0000
    
        Fix typo Duncan noticed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95322 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7a89cf2a84a9742cb18541da4cf7c9256ae4f72c
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Thu Feb 4 18:48:20 2010 +0000
    
        Increase inliner thresholds by 25.
    
        This makes the inliner about as aggressive as it was before my changes to the
        inliner cost calculations. These levels give the same performance and slightly
        smaller code than before.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95320 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6858dd68e42d90b458b0b447612323bcaab6028c
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Thu Feb 4 18:46:28 2010 +0000
    
        Fix small bug in handling instructions with more than one implicitly defined operand.
    
        ProcessImplicitDefs would only mark one operand per instruction with <undef>.
        This fixed PR6086.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95319 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 224fafa77876e93609df4cbd2f1fc8189353f520
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Thu Feb 4 18:40:11 2010 +0000
    
        Get the LLVMC tests working with clang++ by removing the problematic CXXFLAG in lit.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95318 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 08c61aa2f10ad1819dcb72b9cf97e63e9ae1aad0
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Thu Feb 4 11:57:54 2010 +0000
    
        Apply property changes from PR6228.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95303 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 361dab7c6f2a1549ad87ace3fbfd8cb6845f7a4f
    Author: Edwin Török <edwintorok at gmail.com>
    Date:   Thu Feb 4 09:31:35 2010 +0000
    
        New flag for GenLibDeps, and llvm-config-perobjincl.
    
        This makes it possible to show the explicit files that need to be built/linked
        to get an LLVM component.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95300 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5856a4fae7c87dc2938c1442f4ce73e9117ed52e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 4 07:32:01 2010 +0000
    
        move the PR6214 microoptzn to this file.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95299 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ec8ea91a09cc6dfaadb5c1c867467f1e4185e236
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 4 07:11:08 2010 +0000
    
        fix a broken archive that was breaking dejagnu only (not lit)
        after r95292
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95296 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8b36ef7f8be0244c1f43d84eed43178991668089
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Feb 4 06:47:24 2010 +0000
    
        Re-enable x86 tail call optimization.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95295 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bd2bd23ea5f37c9bf362137e0fcab6ce798b9d37
    Author: Eric Christopher <echristo at apple.com>
    Date:   Thu Feb 4 06:41:27 2010 +0000
    
        Temporarily revert this since it appears to have caused a build
        failure.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95294 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c102e823b361cdf9c0f7e0d95b3668e13617d7da
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 4 06:34:01 2010 +0000
    
        add support for the sparcv9-*-* target triple to turn on
        64-bit sparc codegen.  Patch by Nathan Keynes!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95293 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2d60cc5451bbcc875d8d0d4c42a3dc970a7469e8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Feb 4 06:19:43 2010 +0000
    
        From PR6228:
    
        "Attached patch removes the extra NUL bytes from the output and changes
        test/Archive/MacOSX.toc from a binary to a text file (removes
        svn:mime-type=application/octet-stream and adds svn:eol-style=native).  I can't
        figure out how to get SVN to include the new contents of the file in the patch
        so I'm attaching it separately."
    
        Patch by James Abbatiello!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95292 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fb6f779a5ff24d22b74bb396232ac84b9386d682
    Author: Eric Christopher <echristo at apple.com>
    Date:   Thu Feb 4 02:55:34 2010 +0000
    
        Rework constant expr and array handling for objectsize instcombining.
    
        Fix bugs where we would compute out of bounds as in bounds, and where
        we couldn't know that the linker could override the size of an array.
    
        Add a few new testcases, change existing testcase to use a private
        global array instead of extern.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95283 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fa66ea30c2cf83a19157ec92b4e520f5cf4bd299
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Feb 4 02:45:02 2010 +0000
    
        It's too risky to eliminate sext / zext of call results for tail call optimization even if the caller / callee attributes completely match. The callee may have been bitcast'ed (or otherwise lied about what it's doing).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95282 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9462c3a8c6cabfac5824aaf1274a210edaca3993
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Feb 4 02:43:51 2010 +0000
    
        Change the argument to getIntegerSCEV to be an int64_t, rather
        than int. This will make it more convenient for LSR, which does
        a lot of things with int64_t offsets.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95281 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5c9b1478e9781dfd689a7297d401f259f67c3d68
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Feb 4 02:40:39 2010 +0000
    
        Indirect tail call has to go through a call preserved register since it's after callee register pops. X86 isel lowering is using EAX / R11 and it was somehow adding that to function live out. That prevented the real function return register from being added to the function live out list and bad things happen.
        This fixes 483.xalancbmk (with tail call opt).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95280 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d77df71803f0e9d49c876caa31f4ace2c4b4cd98
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Thu Feb 4 01:43:08 2010 +0000
    
        Filled in a few new APIs for the enhanced
        disassembly library that provide access to
        instruction information, and fixed ambiguous
        wording in the comments for the header.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95274 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0926ed7137ab99acade66f1c9bf746d040a2db27
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Feb 4 01:42:13 2010 +0000
    
        Use a tab instead of space after .type, for consistency.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95272 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 40787dcb640de2fc69c0291e1a788c46d0ed3d78
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Thu Feb 4 01:33:43 2010 +0000
    
        Rewrite FP constant handling in DEBUG_VALUE yet
        again, so it more or less handles long double.
        Restore \n removed in latest MC frenzy.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95271 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5aa5f8dbf6127b2c4afe4f1bb15396e82fdca049
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Thu Feb 4 01:13:08 2010 +0000
    
        Fix (and test) function-local metadata that occurs before the instruction that it refers to; fix is to not enumerate operands of function-local metadata until after all instructions have been enumerated
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95269 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4fefacb214916e74d81d132fcb76a089e020499c
    Author: Eric Christopher <echristo at apple.com>
    Date:   Wed Feb 3 23:56:07 2010 +0000
    
        If we're dealing with a zero-length array, don't lower to any
        particular size, we just don't know what the length is yet.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95266 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1bce8ccee05dd5340560bc330068e5d6e3602f99
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Feb 3 22:33:17 2010 +0000
    
        This test passes now on ppc darwin; if it doesn't pass
        on some other ppc say something on the list.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95265 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 07c1cb52549b6c95ce42c23893bff6764b6e7042
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Feb 3 22:29:02 2010 +0000
    
        This test passes now on ppc darwin, so reenable it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95264 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c09099dadac902f3e672e26c0831d6b2151d779f
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Feb 3 22:24:49 2010 +0000
    
        Debugging is now reenabled on PPC darwin, so reenable
        these tests (they pass).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95263 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 777872cc6a319b8695bd62962aaeb9ebbf3d969e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 3 21:57:59 2010 +0000
    
        enhance new encoder to support prefixes + RawFrm
        instructions with no operands.  It can now handle
    
        define void @test2() nounwind { ret void }
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95261 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 364fb537e4e3f3c4bb46d15a086b682a36bb3065
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 3 21:43:43 2010 +0000
    
        set up some infrastructure, some minor cleanups.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95260 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a825a3149e613e5e42730fa790f9e39e5a9f635c
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Feb 3 21:40:40 2010 +0000
    
        Speculatively disable x86 automatic tail call optimization while we track down a self-hosting issue.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95259 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 41aed3c15b65cdec29da235390b3e3cd7b11ad7a
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Feb 3 21:39:04 2010 +0000
    
        Make test less fragile
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95258 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fd2d81f36803ff0e9f0efdaea13b0276b49d7843
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 3 21:24:49 2010 +0000
    
        stub out a new X86 encoder, which can be tried with
        -enable-new-x86-encoder until it's stable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95256 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1bb768cf0398615a2a6cbc4e98af77eefe960764
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 3 21:14:33 2010 +0000
    
        rename createX86MCCodeEmitter to more accurately reflect what it creates.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95254 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ae5bc0f4122457a67bbfcfddd89b063803e34e3d
    Author: Kevin Enderby <enderby at apple.com>
    Date:   Wed Feb 3 21:04:42 2010 +0000
    
        Added support for X86 instruction prefixes so llvm-mc can assemble them.  The
        Lock prefix, Repeat string operation prefixes and the Segment override prefixes.
        Also added versions of the move string and store string instructions without the
        repeat prefixes to X86InstrInfo.td. And finally marked the rep versions of
        move/store string records in X86InstrInfo.td as isCodeGenOnly = 1 so tblgen is
        happy building the disassembler files.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95252 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2831d5d4aff7323c6526bfa99857dac2bf94295e
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Feb 3 20:08:48 2010 +0000
    
        Emit appropriate expression to find virtual base offset.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95242 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 51ea0cc2933924feb130d52d071e5bdee8ed694b
    Author: Devang Patel <dpatel at apple.com>
    Date:   Wed Feb 3 19:57:19 2010 +0000
    
        Provide an interface to identify artificial methods.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95240 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b1af53a796f93066a34cdf4cf7b9371aa173c1c3
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Wed Feb 3 19:18:04 2010 +0000
    
        r94686 changed all ModuleProvider parameters to Modules, which made the
        1-argument ExecutionEngine::create(Module*) ambiguous with the signature that
        used to be ExecutionEngine::create(ModuleProvider*, defaulted_params).  Fixed
        by removing the 1-argument create().  Fixes PR6221.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95236 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 819e4ea9858786a3a202642b179f4dffc0cde9c2
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Wed Feb 3 18:49:55 2010 +0000
    
        Make docs less specific about their versions, at Chris's suggestion.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95231 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2402ff84538dadd53e6f13f5ad9dd7c6bdfe0ef0
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Feb 3 18:43:46 2010 +0000
    
        Add llvm_supports_darwin_and_target to DejaGNU as well, I'd almost forgotten it
        ever existed. :)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95230 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 38bec178a30bdb19ad3197fcc0abd6b61fe72ee1
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Wed Feb 3 18:23:23 2010 +0000
    
        Mention the version in the documentation index and link to the 2.6 docs, which
        is what most readers will actually be aiming for.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95229 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0889414038faa0c5c8f1fd28db4391c7a1dfabad
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Feb 3 18:18:30 2010 +0000
    
        llvm-mc: Add --show-inst option, for showing the MCInst inline with the assembly
        output.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95227 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 76fd987a802f86d0ca298812b63552a15b7f174d
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Feb 3 17:27:31 2010 +0000
    
        Add "Author Date Id Revision" svn:keyword properties to these files, as
        is done with the other html files in doc, to hopefully keep strings like
        "Last modified" current.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95225 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 66b7149074808f51fd6365b509028ff862a05108
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Feb 3 17:23:56 2010 +0000
    
        Adjust the heuristics used to decide when SROA is likely to be profitable.
        The SRThreshold value makes perfect sense for checking if an entire aggregate
        should be promoted to a scalar integer, but it is not so good for splitting
        an aggregate into its separate elements.  A struct may contain a large embedded
        array along with some scalar fields that would benefit from being split apart
        by SROA.  Even if the total aggregate size is large, it may still be good to
        perform SROA.  Thus, the most important piece of this patch is simply moving
        the aggregate size comparison vs. SRThreshold so that it guards only the
        aggregate promotion.
    
        We have also been checking the number of elements to decide if an aggregate
        should be split up.  The limit of "SRThreshold/4" seemed rather arbitrary,
        and I don't think it's very useful to derive this limit from SRThreshold
        anyway.  I've collected some data showing that the current default limit of
        32 (since SRThreshold defaults to 128) is a reasonable cutoff for struct
        types.  One thing suggested by the data is that distinguishing between structs
        and arrays might be useful.  There are (obviously) a lot more large arrays
        than large structs (as measured by the number of elements and not the total
        size -- a large array inside a struct still counts as a single element given
        the way we do SROA right now).  Out of 8377 arrays where we successfully
        performed SROA while compiling a large set of benchmarks, only 16 of them had
        more than 8 elements.  And, for those 16 arrays, it's not at all clear that
        SROA was actually beneficial.  So, to offset the compile time cost of
        investigating more large structs for SROA, the patch lowers the limit on array
        elements to 8.
    
        This fixes Apple Radar 7563690.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95224 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b692ce605e860ee4d6928eab7586d3cfc2302407
    Author: Garrison Venn <gvenn.cfe.dev at gmail.com>
    Date:   Wed Feb 3 12:00:02 2010 +0000
    
        Repository access test commit
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95221 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b49b883b85ef0c4d56c3343a46bcbea6e4d09a3e
    Author: Zhongxing Xu <xuzhongxing at gmail.com>
    Date:   Wed Feb 3 09:05:21 2010 +0000
    
        Remove redundant declaration.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95213 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b68773962f3210db4e2a3837197737c8eaa6e0ad
    Author: Zhongxing Xu <xuzhongxing at gmail.com>
    Date:   Wed Feb 3 09:04:11 2010 +0000
    
        Add constructors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95212 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1f506f73258301e6cfb4128a3cce5bf93380460a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 3 06:42:38 2010 +0000
    
        reapply r95206, this time actually delete the code I'm replacing in the third stub case.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95209 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cd96d73fe03dff21c9fe0bf9787fc34e37411e34
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 3 06:41:18 2010 +0000
    
        revert r95206, it is apparently causing bootstrap failure on i386-darwin9
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95208 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7a1f1f72708b2ff83d4f5ea1df34e652a73e1ba0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 3 06:28:13 2010 +0000
    
        print instruction encodings with the existing comment facilities,
        so that llvm-mc -show-encoding prints like this:
    
        	hlt                                                 ## encoding: [0xf4]
    
        instead of like this:
    
        	hlt
                             # encoding: [0xf4]
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95207 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b8efb41ad3922cdd022a2eedc8bb5ee098e68b91
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 3 06:21:16 2010 +0000
    
        make the x86 backend emit darwin stubs through mcstreamer
        instead of textually.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95206 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e550c4e717d5135efe0df7e679ec1e83b7ecc0f9
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 3 06:18:30 2010 +0000
    
        make MachineModuleInfoMachO hold non-const MCSymbol*'s instead
        of const ones.  non-const ones aren't very useful, because you can't
        even, say, emit them.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95205 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 73cdb9fff3e6e69e6429be28339d10ae0ba78d1a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 3 05:55:08 2010 +0000
    
        change addPassesToEmitFile to return true on failure instead of its input,
        add -filetype=null for performance testing and remove -filetype=dynlib,
        which isn't planned to be implemented.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95202 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 73b9dcac300a4b4f5814fd628a774ecd2693b639
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Feb 3 03:55:59 2010 +0000
    
        Revert 94937 and move the noreturn check to codegen.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95198 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit acfc18fbad20b8f2e402408648537a463f3cbb57
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Wed Feb 3 03:46:41 2010 +0000
    
        Fixed the disassembler so it accepts multiple
        instructions on a single line.  Also made it a
        bit more forgiving when it reports errors.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95197 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7550a4139b55d976e60bf2025e41e9a76d9bc6df
    Author: John McCall <rjmccall at apple.com>
    Date:   Wed Feb 3 03:42:44 2010 +0000
    
        Make APInt::countLeadingZerosSlowCase() treat the contents of padding bits
        as undefined.  Fixes an assertion in APFloat::toString noticed by Dale.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95196 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4c2d02dcf909a29ad5776b8e320a3e896ec81d91
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Feb 3 03:28:02 2010 +0000
    
        Allow all types of callees to be tail called, but avoid automatic tail calls if the callee is the result of a bitcast, to avoid losing necessary zext / sext etc.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95195 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7aec47e78037512575e13a5b31b059bb2ea02cf6
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Wed Feb 3 02:11:49 2010 +0000
    
        Reconfigure with autoconf-2.60, and fix autoconf.ac to work with that version.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95191 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5da1e50388dd0ba4a949dd5c559ffb6c9433abc1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 3 01:49:49 2010 +0000
    
        don't emit \n's at the start of X86AsmPrinter::runOnMachineFunction,
        .o files don't like that.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95187 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 25137eb1159bc82bdf8a9b6c56a3bc1d05febd8a
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 3 01:46:05 2010 +0000
    
        privatize a bunch of methods and move \n printing into them.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95186 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ed2c379e4c3bf39afbdfe08b573f965a62063b45
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 3 01:41:03 2010 +0000
    
        rename printMachineInstruction -> EmitInstruction
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95184 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0ee7ca20cd5c9b157c4a1ab9825d6c2f0ba99e96
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Feb 3 01:40:33 2010 +0000
    
        Reapply 95050 with a tweak to check the register class.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95183 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dab48b4dbe274d58ffe036848efe3f3de25596eb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 3 01:16:28 2010 +0000
    
        print instructions through the mcstreamer.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95181 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 74887bc2f8910a00660b4dfe55cbd2a3f5639681
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 3 01:15:03 2010 +0000
    
        emit instructions through the streamer.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95180 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5099786d0b6e63912227de6fe21f1921bb8ca311
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 3 01:13:25 2010 +0000
    
        Finally eliminate printMCInst and send instructions through
        the streamer.  Demo:
    
        $ cat t.ll
        define i32 @test() nounwind {
          ret i32 42
        }
        $ llc t.ll -o -
        ...
        _test:
        	movl	$42, %eax
        	ret
        $ llc t.ll -o t.o -filetype=obj
        $ otool -tv t.o
        t.o:
        (__TEXT,__text) section
        _test:
        00000000	movl	$0x0000002a,%eax
        00000005	ret
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95179 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f49ae3f1c237f37309d8b8d3c096c902e64156a7
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 3 01:09:55 2010 +0000
    
        rejigger the world so that EmitInstruction prints the \n at
        the end of the instruction instead of expecting the caller to
        do it.  This currently causes the asm-verbose instruction
        comments to be on the next line.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95178 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 89ef2563079e2c3e90116949105928b57eca6c97
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 3 01:00:52 2010 +0000
    
        sink handling of target-independent machine instrs (other
        than DEBUG_VALUE :(  ) into the target indep AsmPrinter.cpp
        file.   This allows elimination of the
        NO_ASM_WRITER_BOILERPLATE hack among other things.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95177 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c38f1118b1fe7c43c8b4fe67dd920d50a574874e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 3 00:48:53 2010 +0000
    
        make these less sensitive to asm verbose changes by disabling it for them.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95175 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a43ec0f4625cfbff87cf1cad3e233d564fb5b79f
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Feb 3 00:36:40 2010 +0000
    
        Print FPImm a less kludgy way; APFloat.toString seems
        to have some problems anyway.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95171 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4a7420b3f92907948e325b870253bb2215ee871b
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Feb 3 00:33:21 2010 +0000
    
        Fix some comment typos.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95170 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 07a7fb4dfc55877789e4ac89e8e15f7eb193a1c0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 3 00:29:55 2010 +0000
    
        pass an instprinter into the AsmPrinter if it is available.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95168 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b881efd98a3db73235b51fef4a67c8a27ab6c86d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Feb 3 00:22:02 2010 +0000
    
        make any use of the "O" stream in asmprinter print to
        stderr if in filetype=obj mode.  This is a hack, and will
        live until dwarf emission and other random stuff that is
        not yet going through MCStreamer is upgraded.  It only
        impacts filetype=obj mode.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95166 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 23ce2ffb517fec9f017309966c12c1f79a92593e
    Author: Eric Christopher <echristo at apple.com>
    Date:   Wed Feb 3 00:21:58 2010 +0000
    
        Recommit this, looks like it wasn't the cause.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95165 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 455d28b90ab94ef472ed76178182626c18f36424
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Feb 2 23:58:13 2010 +0000
    
        ByVal frame object size should be that of the byval argument, not the size of the type which is just a pointer. This is not known to break stuff but is wrong nevertheless.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95163 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 756d064ed2ce9da3f9ce4667a9bd3dd132312808
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 23:57:42 2010 +0000
    
        Hook up -filetype=obj through the MachO streamer.  Here's a demo:
    
        $ cat t.ll
        @g = global i32 42
        $ llc t.ll -o t.o -filetype=obj
        $ nm t.o
        00000000 D _g
    
        There is still a ton of work left.  Instructions are not being encoded
        yet apparently.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95162 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 21e1c2718234fa1657c0aeeea86b5dece6ce1d74
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Tue Feb 2 23:56:14 2010 +0000
    
        As of r79039, we still try to eliminate the frame pointer on leaf functions,
        even when -disable-fp-elim is specified.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95161 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5f7db78a1e125e52592115b0c97029143fae537a
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Feb 2 23:55:14 2010 +0000
    
        Revert 95130.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95160 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dcbc37a6c5942a820f36cf6b66bdc041df414ce7
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Tue Feb 2 23:54:23 2010 +0000
    
        Accept floating point immediates in DEBUG_VALUE.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95159 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cc21f9123f818edf24d5b40c526dc02a6a2151a4
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Feb 2 23:46:47 2010 +0000
    
        AsmParser/X86: Add temporary hack to allow parsing "sal". Eventually we need
        some mechanism for specifying alternative syntaxes, but I'm not sure what form
        that should take yet.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95158 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5f6ea763bf9b8e0dd4f2d3698a5e29e8d2a5017a
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Feb 2 23:46:36 2010 +0000
    
        AsmMatcherEmitter: Use stable_sort when reordering instructions, so that order
        is still deterministic even amongst ambiguous instructions (eventually ambiguous
        match orders will be a hard error, but we aren't there yet).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95157 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 397cebe4eb9631eb73da640399ac802d76a412fb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 23:45:17 2010 +0000
    
        use OwningPtr and factor code better.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95156 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bc40c64e5548d1a6b93613dc50b487820d8e7d91
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 23:37:42 2010 +0000
    
        refactor code so that LLVMTargetMachine creates the asmstreamer and
        mccontext instead of having AsmPrinter do it.  This allows other
        types of MCStreamer's to be passed in.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95155 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d334d70d6b78b9533b875b329aac174ffa922747
    Author: Eric Christopher <echristo at apple.com>
    Date:   Tue Feb 2 23:01:31 2010 +0000
    
        Hopefully temporarily revert this.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95154 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b5696b523dbc5c8586a5222d5519fe820ee9424b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 22:58:13 2010 +0000
    
        simplify getVerboseAsm
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95153 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6b284d094e3805cf4d3dca8fbb91acbad84cf546
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 22:54:51 2010 +0000
    
        move handling of asm-verbose out of AsmPrinter.cpp into LLVMTargetMachine.cpp with the rest of the command line options.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95152 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e80dc6079eaeaf314617975b9518414674af2cbd
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 22:37:42 2010 +0000
    
        remove dead #include, stupid symlinks.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95150 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d8982c7f29ed5ead56d2b93cfaa728a50a3ed12c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 22:36:29 2010 +0000
    
        remove the # TAILCALL markers, which were causing the test to fail.
        It's unclear whether the matcher is nondeterministic or what is
        going on here, but I'm getting matches without TAILCALL and some
        other hosts are getting matches with it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95149 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8622da039332c3928eb306e8317c8cd1a18d4809
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 22:31:11 2010 +0000
    
        Remove a bunch of stuff around the edges of the ELF writer.
        Now the only use of the ELF writer is the JIT, which won't be
        easy to fix in the short term. :( :(
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95148 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c66b12bfe23a9469cfe8316dd99204cc94b4b5cf
    Author: Eric Christopher <echristo at apple.com>
    Date:   Tue Feb 2 22:29:26 2010 +0000
    
        Reformat my last patch slightly.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95147 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 049f71265b0a9dc1bf938c87b9d3c3b076e47835
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 22:13:21 2010 +0000
    
        tidy some targets.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95146 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e3b4b0ba19b5001a565087800fd343cef5be9e39
    Author: Eric Christopher <echristo at apple.com>
    Date:   Tue Feb 2 22:10:43 2010 +0000
    
        Re-add strcmp and known size object size checking optimization.
    
        Passed bootstrap and nightly test run here.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95145 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ed3cf2287918962e1d0c7a4daf3817fef8700fb0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 22:03:00 2010 +0000
    
        remove dead code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95144 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b84851fd840d1bc4d79d696b6e8679d3acbafcda
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Feb 2 22:00:15 2010 +0000
    
        MCAssembler/Darwin: Add a test (on Darwin) that we assemble a bunch of
        instructions exactly like 'as', and produce equivalent .o files.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95143 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 062ed9b30fff92c004912477bb9255722f8835ed
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 21:55:58 2010 +0000
    
        detemplatize the ppc code emitter.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95142 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 669180b467a7ec704fd95d8d26251fbaffe3e1fd
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 21:52:03 2010 +0000
    
        remove dead code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95141 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 050e1f2800742f0fe7dbc4cf416a49bb1f6ac773
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 21:49:29 2010 +0000
    
        add a definition for ID.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95140 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d19fc96465d615f29b254a2a5bdb5a9a431b6dab
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 21:48:51 2010 +0000
    
        detemplatize ARM code emitter.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95138 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 85186c4e27f082dce8568ce347983481bce4607e
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Feb 2 21:44:16 2010 +0000
    
        MCAsmParser/X86: Represent absolute memory operands as CodeGen does, with scale
        == 1.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95137 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6776221b04b736cec359aace62501c512d2dc7a1
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Feb 2 21:44:10 2010 +0000
    
        MCCodeEmitter/X86: Handle tied registers better when converting MCInst ->
        MCMachineInstr. This also fixes handling of tied registers for MRMSrcMem
        instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95136 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit df557c865aaa18a337964f7cccf7973b12d9735a
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Feb 2 21:44:01 2010 +0000
    
        MC/Mach-O: Set SOME_INSTRUCTIONS bit for sections.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95135 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f9365bf724b4308e2bac368a2f29a02fe949ff0f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 21:38:59 2010 +0000
    
        remove dead code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95134 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f393cd4958708c36c353f9179ee4f3324edc801c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 21:35:47 2010 +0000
    
        detemplatize alpha code emission, it is now JIT specific.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95133 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bb4e4e82f29f101e47c0d3fc42be85a82e993cad
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 21:31:47 2010 +0000
    
        eliminate all the dead addSimpleCodeEmitter implementations.
    
        eliminate random "code emitter" stuff in Alpha, except for
        the JIT path.  Next up, remove the template cruft.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95131 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 32abfae758e1d86d915f1f0e5aec20bea802d4fa
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Feb 2 21:29:10 2010 +0000
    
        Pass callsite return type to TargetLowering::LowerCall and use that to check sibcall eligibility.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95130 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 818ad00b8aa8a10052e77c92c67f69f32b2356ba
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Feb 2 21:11:22 2010 +0000
    
        Make DenseSet's erase pass on the return value rather than swallowing it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95127 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f3adb09481fcde8dd865a5a2fcb8201e3dac7b97
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Feb 2 21:10:27 2010 +0000
    
        Fix function names in comments. Thanks Duncan!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95126 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 138c76eb26da9c0d40033566d2a57f2d5510f1ba
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 21:06:45 2010 +0000
    
        eliminate FileModel::Model, just use CodeGenFileType.  The client
        of the code generator shouldn't care what object format a target
        uses.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95124 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 71f26cdad52675b2b0ef19c349d0f59cd27323ed
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 20:57:28 2010 +0000
    
        this apparently depends on the host somehow.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95122 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f792e0305f7a44f12e12b65a89c101a43aeb687b
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Tue Feb 2 20:56:02 2010 +0000
    
        XFAIL for PPC Darwin.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95121 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 742d333e513f892dfe2e5d31d3e53008bf3176f7
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 20:41:39 2010 +0000
    
        disable this test for now.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95120 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cfe96230d1779423b5f75664be0053fece155600
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Tue Feb 2 20:20:30 2010 +0000
    
        ...and fixed the Makefile.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95119 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 29afd237bdf1618d1f9e82919bb9346b8e8f3833
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Tue Feb 2 20:11:23 2010 +0000
    
        Renamed the ed directory to edis, as suggested
        yesterday.  This eliminates possible confusion
        about what exactly is in this directory; the name
        is still short, though.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95118 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 63cb9e7fa1ca3ddf761bf5d1ea3a35711ca16689
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 19:41:23 2010 +0000
    
        remove the remnants of TargetMachOWriterInfo.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95114 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6f6ee467bd3d1dd02b2dd8d95e90d1aad451fafb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 19:38:14 2010 +0000
    
        Add a new top-level MachO.h file for manifest constants, fixing
        a layering violation from MC -> Target.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95113 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 982557e7ea0bb8dda5799323072c0e2ce7c5099d
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Tue Feb 2 19:31:58 2010 +0000
    
        Added t2BFI (Bitfield Insert) entry for disassembler, with blank pattern field.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95112 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 368dfe8381c0b8bf62cd2f7c279af5917ceb24eb
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 19:23:55 2010 +0000
    
        remove PPCMachOWriterInfo.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95111 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0b860df49af8a5ce36983931fe3c3473e318d037
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 19:14:27 2010 +0000
    
        eliminate all forms of addPassesToEmitMachineCode except
        the one used by the JIT.  Remove all forms of
        addPassesToEmitFileFinish except the one used by the static
        code generator.  Inline the remaining version of
        addPassesToEmitFileFinish into its only caller.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95109 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c74cb6dfb1a37f36bf306ce88282c8478c40f062
    Author: Kevin Enderby <enderby at apple.com>
    Date:   Tue Feb 2 19:05:57 2010 +0000
    
        Added another version of the X86 assembler matcher test case.
        This test case is a different subset of the full auto-generated test
        case, and a larger subset than the one in x86_32-bit.s (that set will
        encode correctly).  These instructions can pass through llvm-mc as if it
        were a logical cat(1) and then reassemble to the same instruction.  It
        is useful as we bring up the parser and matcher so we don't break
        things that currently work.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95107 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bf50076b6922bd5761ce8f827c44fd5395329dc7
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 19:03:39 2010 +0000
    
        remove dead code, we're requesting TargetMachine::AssemblyFile here.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95105 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8184d24c93887f1d3eb9cbda7d7e8fa19e8c6344
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Tue Feb 2 18:52:56 2010 +0000
    
        Test revert 95050; there's a good chance it's causing
        buildbot failure.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95103 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 545b1b1a9d368259e05f745840ebf091d0cad704
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 18:44:12 2010 +0000
    
        Inline addAssemblyEmitter into its one real caller and delete
        the -print-emitted-asm option.  The JIT shouldn't have to pull
        in the asmprinter.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95100 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit abebd2f6102df6dbf12b44e147132ca029a36018
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Tue Feb 2 12:53:04 2010 +0000
    
        Adding missing methods for creating Add, Mul, Neg and Sub with NUW.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95086 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8142256571dbba74bf0d1dc5dc3c696096dd185c
    Author: Zhongxing Xu <xuzhongxing at gmail.com>
    Date:   Tue Feb 2 07:05:31 2010 +0000
    
        Return value on every path.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95075 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d4c74a63f9c6569b68b0c4dd086f3fbf2c8c5add
    Author: Zhongxing Xu <xuzhongxing at gmail.com>
    Date:   Tue Feb 2 06:33:32 2010 +0000
    
        simplify code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95074 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 84aa735e88bb3f357f00f9a83f9e4d7bfb1440d2
    Author: Zhongxing Xu <xuzhongxing at gmail.com>
    Date:   Tue Feb 2 06:22:08 2010 +0000
    
        More logic correction: RemoveOverlap should always create a new tree.
        Add a parameter to record whether changes actually happened.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95073 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9eeca10a528f87882fcc1bdeeb267a008b97296c
    Author: Zhongxing Xu <xuzhongxing at gmail.com>
    Date:   Tue Feb 2 05:23:23 2010 +0000
    
        Add a lookup method to the IntervalMap. The difference from the original
        lookup is that if the lookup key is contained within a key interval, we
        return the data.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95070 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 32b86302fc0e3568bfdd453919443e7b3589d9d0
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Feb 2 03:47:27 2010 +0000
    
        Apparently gdb is not amused by empty lines in pubtypes section.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95064 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b36a8b11f1b7e8b451c805c554b357f9924d98d2
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Feb 2 03:37:03 2010 +0000
    
        NULL terminate name in pubtypes sections.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95062 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 44e7bce35179730d599b3ae35096df04b0e79553
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 02:43:51 2010 +0000
    
        don't turn (A & (C0?-1:0)) | (B & ~(C0?-1:0)) ->  C0 ? A : B
        for vectors.  Codegen is generating awful code or segfaulting
        in various cases (e.g. PR6204).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95058 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 06e4b56331b271118d56da0e64ea49ec9a3ecab2
    Author: Zhongxing Xu <xuzhongxing at gmail.com>
    Date:   Tue Feb 2 02:40:56 2010 +0000
    
        Fix a bunch of errors in the old logic.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95056 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d9ba1ba9fcd7fcb040dcb3d216183d07dce285d9
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 02:26:54 2010 +0000
    
        fix a crash in loop unswitch on a loop invariant vector condition.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95055 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8a64e3edd146c918912c07ea7cf95f1e103dcc88
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Feb 2 02:23:37 2010 +0000
    
        remove an unreduced testcase, rename another.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95054 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f03020966b3492b238fe1262c062e7a7d9fc58b4
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Feb 2 02:22:50 2010 +0000
    
        Perform sibcall in some cases when arguments are passed in memory.  Look
        for cases where the callee's arguments are already in the caller's own
        caller's stack and they line up perfectly. e.g.
    
        extern int foo(int a, int b, int c);
    
        int bar(int a, int b, int c) {
          return foo(a, b, c);
        }
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95053 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c8e0bbfc241381fdc3c8bdfa0c9e93b8b49d8f2e
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Tue Feb 2 02:18:20 2010 +0000
    
        Removed an unnecessary class from the EDDisassembler
        implementation.  Also made sure that the register maps
        were created during disassembler initialization.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95051 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c0d238a0dbb716040b1058342db54fe4eb7e652e
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Tue Feb 2 02:08:02 2010 +0000
    
        Make local RA smarter about reusing input register of a copy
        as output.  Needed for (functional) correctness in inline asm,
        and should be generally beneficial.  7361612.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95050 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e078aea60472fd4dcb96170350fe942c01f830af
    Author: Zhongxing Xu <xuzhongxing at gmail.com>
    Date:   Tue Feb 2 01:57:01 2010 +0000
    
        11.8p1: A nested class is a member and as such has the same access rights as
        any other member.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95047 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 03cd7a89d7f9c3e7114d25a513da4f15ba4e2dbf
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Feb 2 01:44:02 2010 +0000
    
        LangRef.html says that inttoptr and ptrtoint always use zero-extension
        when the cast is extending.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95046 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 94974afa225f916cc1fc5d9aa7aa6b373cb235d9
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Feb 2 01:41:39 2010 +0000
    
        Factor out alignof expression folding into a separate function and
        generalize it to handle more cases.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95045 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c45b3aa1897a99f74fbb73038c8fd2995eb250bc
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Feb 2 01:38:49 2010 +0000
    
        Various code simplifications.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95044 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6eb9d10f94d819a320f61f8c961643758515dd50
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Tue Feb 2 01:12:20 2010 +0000
    
        Update CMake.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95041 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d7e49f3f6b670e14a3ee8c4c6e5c72474e323d6a
    Author: Eric Christopher <echristo at apple.com>
    Date:   Tue Feb 2 00:51:45 2010 +0000
    
        Don't need to check the last argument since it'll always be bool. We also
        don't use TargetData here.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95040 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3b60eab75e05595d7ec77de8b4f9bfaee8f3249d
    Author: Eric Christopher <echristo at apple.com>
    Date:   Tue Feb 2 00:13:06 2010 +0000
    
        More indentation/tabification fixes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95036 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4b27f221be15788393f00a22a602ebf169801bf1
    Author: Eric Christopher <echristo at apple.com>
    Date:   Tue Feb 2 00:06:55 2010 +0000
    
        Untabify previous commit.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95035 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ac567ab867a93e8ce46f5b56228314bedfcceba5
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Tue Feb 2 00:04:46 2010 +0000
    
        Changed to Chris Lattner's suggested approach, which
        merely stubs out the blocks-based disassembly functions
        if the library wasn't built with blocks, which allows a
        constant .exports file and also properly deals with
        situations in which the compiler used to build a client
        is different from the compiler used to build the library.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95034 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 61aac7ef2bc97f1c04f0d7e826df89ffe4bb2559
    Author: Nate Begeman <natebegeman at mac.com>
    Date:   Mon Feb 1 23:56:58 2010 +0000
    
        Kill the Mach-O writer, and temporarily make filetype=obj an error.
        The MCStreamer based assemblers will take over for this functionality.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95033 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3617f6aebcdb06b1465ba0165f9bc35d7207bbf7
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Mon Feb 1 23:27:57 2010 +0000
    
        Fix for builds with separate source and build
        directories (like, oh, say, any multistage build)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95028 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9e99517c5996acc9ca1b774275515fea7fcf1525
    Author: Eric Christopher <echristo at apple.com>
    Date:   Mon Feb 1 23:25:03 2010 +0000
    
        Formatting.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95027 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4a5f31df635d0c8d65f19d0cc32f6c3476836cc5
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Mon Feb 1 23:06:04 2010 +0000
    
        MOVi16 should also be marked as a UnaryDP instruction, i.e., it doesn't have a
        Rn operand.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95025 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4a9df81d4d00aea149cb8de9195b7322e3361200
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Mon Feb 1 23:01:38 2010 +0000
    
        Updated to use the proper .exports file for the
        target platform, depending on whether the target
        supports the blocks API or not.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95024 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d89e3fb6c3baedce21438f3cc3032aacb49a9088
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Mon Feb 1 22:51:23 2010 +0000
    
        Add "dump" method to IVUsersOneStride.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95022 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aaaed409c8cfd5e9dbaa694a30d2b0a7500c595c
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Mon Feb 1 22:46:05 2010 +0000
    
        Testcase for 94996 (PR 6157)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95021 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 17e9fca94d44eb1bec5b7a39be5f2e1e63655773
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Feb 1 22:40:09 2010 +0000
    
        Fix PR6196. GV callee may not be a function.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95017 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ba34ec2cabc28cde10a7ada59c5b52de0b4fe27c
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Feb 1 22:32:42 2010 +0000
    
        Add test case for 95013.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95014 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 50a68ccb7108932f75967a0036a15638cbec3cd1
    Author: Mon P Wang <wangmp at apple.com>
    Date:   Mon Feb 1 22:15:09 2010 +0000
    
        Improve EXTRACT_VECTOR_ELT patch based on comments from Duncan
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95012 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 61e230d031189201840a17be52d44fa73e66067e
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Mon Feb 1 21:57:50 2010 +0000
    
        Rollback on including blocks functionality in .exports
        because some platforms don't support blocks and then
        break because the symbols aren't present.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95011 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a493aa499b18bb534004ab850caba29042d44cd0
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Feb 1 21:17:14 2010 +0000
    
        Add an option to GVN to remove all partially redundant loads.  This is currently
        disabled by default.  This divides the existing load PRE code into 2 phases:
        first it checks that it is safe to move the load to each of the predecessors
        where it is unavailable, and then if it is safe, the code is changed to move
        the load.  Radar 7571861.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95007 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c4350840a22dd22fcae29ed124309d46ca107a3e
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Mon Feb 1 20:57:35 2010 +0000
    
        Do an early exit when the result is known cheaply.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95002 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e985e1fe58b689501fcd07947a0b9e288c275951
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 1 20:48:08 2010 +0000
    
        eliminate a bunch of pointless LLVMContext arguments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95001 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8abd8f0541a289c17536761bd5e236cf1fa280f8
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Mon Feb 1 20:42:02 2010 +0000
    
        Fix typo "of" -> "or" and change the way a line was formatted to fit
        into 80 columns to match my artistic preferences.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@95000 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4f9385fc14a2cc39dc912c1f425a73f27d78f2a9
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 1 20:04:40 2010 +0000
    
        fix PR6195, a bug constant folding scalar -> vector compares.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94997 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e1c3887e98f2b62b9a25b308619217fe19c41459
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Mon Feb 1 19:54:53 2010 +0000
    
        fix PR 6157.  Testcase pending.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94996 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 356faaae39ef1499382bf50573537fa2bc463237
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 1 19:54:45 2010 +0000
    
        cleanups.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94995 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 03ad0a8bb6f3be3a2185ee63b83cbae9bf18f2b2
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 1 19:35:08 2010 +0000
    
        fix PR6197 - infinite recursion in ipsccp due to block addresses
    
        evaluateICmpRelation wasn't handling blockaddress.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94993 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4762dd39a17dc7e9700bd0308ae4125be085a2c3
    Author: Mon P Wang <wangmp at apple.com>
    Date:   Mon Feb 1 19:03:18 2010 +0000
    
        Fixed a couple of optimizations with EXTRACT_VECTOR_ELT that assumed
        the result type is the same as the element type of the vector.
        EXTRACT_VECTOR_ELT can be used to extend the width of an integer type.
        This fixes a bug for Generic/vector-casts.ll on a ppc750.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94990 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 32ec1a5d49759636247ef24827f42e9c66c026f3
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Feb 1 19:00:32 2010 +0000
    
        Update this test for a trivial register allocation difference.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94989 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 45f1643e0c6fe92ec35ed81d12386372f14be6e3
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Feb 1 18:27:38 2010 +0000
    
        Generalize target-independent folding rules for sizeof to handle more
        cases, and implement target-independent folding rules for alignof and
        offsetof. Also, reassociate reassociative operators when it leads to
        more folding.
    
        Generalize ScalarEvolution's isOffsetOf to recognize offsetof on
        arrays. Rename getAllocSizeExpr to getSizeOfExpr, and getFieldOffsetExpr
        to getOffsetOfExpr, for consistency with analogous ConstantExpr routines.
    
        Make the target-dependent folder promote GEP array indices to
        pointer-sized integers, to make implicit casting explicit and exposed
        to subsequent folding.
    
        And add a bunch of testcases for this new functionality, and a bunch
        of related existing functionality.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94987 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2091359832897a7ba7769eeebf6e0ebc67c83867
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 1 18:11:34 2010 +0000
    
        fix rdar://7590304, a miscompilation of objc apps on arm.  The caller
        of objc message send was getting marked arm_apcscc, but the prototype
        wasn't.  This is fine at runtime because objc_msgSend is implemented in
        assembly.  Only turn a mismatched caller and callee into 'unreachable'
        if the callee is a definition.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94986 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e82dad6d1c3606ee017379967e31c385b4eb5750
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Mon Feb 1 18:04:58 2010 +0000
    
        fix rdar://7590304, an infinite loop in instcombine.  In the invoke
        case, instcombine can't zap the invoke for fear of changing the CFG.
        However, we have to do something to prevent the next iteration of
        instcombine from inserting another store -> undef before the invoke
        thereby getting into infinite iteration between dead store elim and
        store insertion.
    
        Just zap the callee to null, which will prevent the next iteration
        from doing anything.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94985 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 14cf1435c8c2e5f774127a1234c7f76079435e26
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Mon Feb 1 17:41:44 2010 +0000
    
        Fix pr6198 by moving the isSized() check to an outer conditional.
        The testcase from pr6198 does not crash for me -- I don't know what's up with
        that -- so I'm not adding it to the tests.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94984 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 48f174e79a252d99bdc8aee2c821f6871dbbe0bc
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Feb 1 16:38:14 2010 +0000
    
        Add a getNUWMul function.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94982 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 57c194af6e3725787f73ae1ffeb752906914ca5d
    Author: Dan Gohman <gohman at apple.com>
    Date:   Mon Feb 1 16:37:38 2010 +0000
    
        Add a generalized form of ConstantExpr::getOffsetOf which works for
        array types as well as struct types, and which accepts arbitrary
        Constant indices.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94981 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7269641c424bd8d0e0f6c22f6402521bd8dfad2a
    Author: Bruno Cardoso Lopes <bruno.cardoso at gmail.com>
    Date:   Mon Feb 1 12:16:39 2010 +0000
    
        MulOp is actually a Mips-specific node, so do the match using Opcode. This fixes PR6192.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94977 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9b5cef72a83cda3fc651e07c92fab7e83153333c
    Author: Zhongxing Xu <xuzhongxing at gmail.com>
    Date:   Mon Feb 1 10:43:31 2010 +0000
    
        Add an immutable interval map, prepared to be used by flat memory model
        in the analyzer. WIP.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94976 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6c30767bed5a692acf43f4bb07f82e12e593a933
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Mon Feb 1 09:02:24 2010 +0000
    
        Whoops, left some debugging code in that broke
        a buildbot.  Removed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94975 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 95ba688843344ba4f811127dd4f7904f3c6cbcb0
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Mon Feb 1 08:49:35 2010 +0000
    
        Added the enhanced disassembly library's implementation and
        fleshed out the .exports file.  I still have to fix several
        details of operand parsing, but the basic functionality is
        there and usable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94974 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0790256d8b2b86223eae754e8659630ec606ac88
    Author: Zhongxing Xu <xuzhongxing at gmail.com>
    Date:   Mon Feb 1 07:32:52 2010 +0000
    
        Simplify code. We can compare TNew with T in one batch.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94973 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ee5eee2923fa2b9b8c8b68559bce92b2838c74c1
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Mon Feb 1 02:13:39 2010 +0000
    
        Undo r94946 now that all the tests are passing again.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94970 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4cd00632df8334a9fcbce4f01160030cdb6e7ba4
    Author: Bruno Cardoso Lopes <bruno.cardoso at gmail.com>
    Date:   Mon Feb 1 02:03:24 2010 +0000
    
        Fix a stack size bug when using the o32 ABI
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94969 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fb62f2f3ab0bfdbf5598e586964736db2ad55da4
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Sun Jan 31 11:22:28 2010 +0000
    
        For MVNr and MVNs, we need to set Inst{25} = 0 so as not to confuse the decoder.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94955 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4489953fa201178eac3af8e19f04f58bdb3e2b28
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Jan 31 07:28:44 2010 +0000
    
        Change TAILJMP's to be varargs and transfer implicit uses over from TCRETURN's. Otherwise the missing uses can make post-regalloc scheduling do bad things. This fixes 403.gcc.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94950 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cf3892e990dfce6790ff57e7dccc9e5fb829b258
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Jan 31 07:27:31 2010 +0000
    
        Fix a missing check from my last commit.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94949 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a04696cdaaaebb88550de183439178e812c85e7f
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Jan 31 06:44:49 2010 +0000
    
        Avoid recursive sibcalls.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94946 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cf83c91bbd530f19054c51da0758e41a189a0251
    Author: Eli Friedman <eli.friedman at gmail.com>
    Date:   Sun Jan 31 04:55:32 2010 +0000
    
        Remove a completed item, add a couple new ones.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94945 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8867dbd7ba85a07d11267561f58db7675c2af561
    Author: Eli Friedman <eli.friedman at gmail.com>
    Date:   Sun Jan 31 04:40:45 2010 +0000
    
        Remove test which is no longer relevant.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94944 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5714a01e760ee1591da125c9fb4dfdd2df4a051d
    Author: Eli Friedman <eli.friedman at gmail.com>
    Date:   Sun Jan 31 04:29:12 2010 +0000
    
        Simplify/generalize the xor+add->sign-extend instcombine.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94943 91177308-0d34-0410-b5e6-96231b3b80d8
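
        [Editor's note: the xor+add idiom this instcombine targets is the
        classic two's-complement sign-extension trick, (x ^ 0x80) - 0x80 for
        an 8-bit value.  A minimal sketch of the identity (mine, not part of
        the commit):]

```python
def sign_extend_8(b: int) -> int:
    """Sign-extend an 8-bit value via the xor+add idiom: (b ^ 0x80) - 0x80."""
    return (b ^ 0x80) - 0x80

# Exhaustive check against the reference definition for all 8-bit values.
for b in range(256):
    ref = b - 256 if b >= 128 else b
    assert sign_extend_8(b) == ref
```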
    
    commit e7fa5774587e56d1071348acdadd7569a4fa9092
    Author: Eli Friedman <eli.friedman at gmail.com>
    Date:   Sun Jan 31 02:30:23 2010 +0000
    
        Add a small transform: transform -(X<<Y) to (-X<<Y) when the shift has a single
        use and X is free to negate.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94941 91177308-0d34-0410-b5e6-96231b3b80d8
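
        [Editor's note: the identity behind this transform, -(X<<Y) == (-X)<<Y,
        holds in two's-complement arithmetic because a left shift is a
        multiplication by a power of two.  A quick sketch (mine, not part of
        the commit) checking it under i32 wraparound semantics:]

```python
MASK = 0xFFFFFFFF  # model i32 two's-complement wraparound

def neg32(x: int) -> int:
    return (-x) & MASK

def shl32(x: int, y: int) -> int:
    return (x << y) & MASK

# -(X << Y) == (-X) << Y for every shift amount and a few sample values.
for x in (0, 1, 5, 0x7FFFFFFF, 0x80000000, 0xDEADBEEF):
    for y in range(32):
        assert neg32(shl32(x, y)) == shl32(neg32(x), y)
```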
    
    commit c91dadab87396596808a85161727d7d966162c93
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Sun Jan 31 02:28:18 2010 +0000
    
        Moved InstallLexer() from the X86-specific AsmLexer
        to the TargetAsmLexer class so that clients can
        actually use the TargetAsmLexer they get from a
        Target.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94940 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5d40692b1b30ecd409a6bdc3d7bf4c433bf634ae
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sun Jan 31 00:59:31 2010 +0000
    
        Do not mark no-return calls as tail calls. It'll screw up special calls like longjmp and it doesn't make much sense for performance reasons. If my logic is faulty, please let me know.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94937 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cc3179860ceed01b85071c63c3686ad7dbbb3ec9
    Author: Bruno Cardoso Lopes <bruno.cardoso at gmail.com>
    Date:   Sat Jan 30 18:32:07 2010 +0000
    
        Fix PR6144. Reload GP before the emission of CALLSEQ_END to guarantee the right reload order
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94915 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5c4cf9d40bb5d64470e410317fbad462a318a0a0
    Author: Bruno Cardoso Lopes <bruno.cardoso at gmail.com>
    Date:   Sat Jan 30 18:29:19 2010 +0000
    
        Fix mov.d out register by using the FFR register class directly
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94914 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 592196d5d01f44ad2929c06e0d2cda9b57d09168
    Author: Anton Korobeynikov <asl at math.spbu.ru>
    Date:   Sat Jan 30 14:08:12 2010 +0000
    
        Fix a gross typo: ARMv6+ may or may not support unaligned memory operations.
        Even if they are supported by the core, they can be disabled
        (this is just a configuration bit inside some register).
    
        Allow unaligned memops on darwin and conservatively disallow them otherwise.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94889 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b79f9b62f198d1ae452da23bf7d5a9aea486c0c9
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Sat Jan 30 04:42:39 2010 +0000
    
        Check alignment of loads when deciding whether it is safe to execute them
        unconditionally.  Besides checking the offset, also check that the underlying
        object is aligned as much as the load itself.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94875 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0108c91d2ac699a4437e0ab1b1e1c5d15bc4f603
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Jan 30 01:22:00 2010 +0000
    
        Allow more tailcall optimization: calls with inputs that are all passed in registers.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94873 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bbbdf911a8e061b5bb0da8236af38ff309c3fdd8
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Sat Jan 30 01:16:15 2010 +0000
    
        Don't forget to transfer target flag when inserting a tailcall instruction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94872 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a24e103806f5cc8150d95a3d17c34f7b15a2ac54
    Author: Devang Patel <dpatel at apple.com>
    Date:   Sat Jan 30 01:08:30 2010 +0000
    
        Emit declaration DIE for the class static variables.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94870 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 827c600e36786c4cb41fae2e9b41a829c1a52c75
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Jan 30 01:02:48 2010 +0000
    
        MC/X86 AsmParser: Handle absolute memory operands correctly. We were doing
        something totally broken and parsing them as immediates, but the .td file also
        had the wrong match class so things sort of worked. Except, that is, that we
        would parse
          movl $0, %eax
        as
          movl 0, %eax
        Feel free to guess how well that worked.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94869 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 23b9b1e20e2939687c01c26435ae27f9af28a201
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Jan 30 01:02:37 2010 +0000
    
        AsmMatcher: Create operand classes before use, apparently records aren't visited
        in the order they were declared.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94868 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5b5007f7ee0140eebeb7a41adcc32fead57e4569
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Sat Jan 30 00:57:47 2010 +0000
    
        Fix a case where debug_value could affect codegen.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94866 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit da88458ae2e4520e5a57b8e4a1e478c25a2d231f
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Sat Jan 30 00:41:10 2010 +0000
    
        Use more specific types to avoid casts.  No functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94863 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit df2bdbad3c28a03d5ac73599f1bf101fb2a3f71c
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Sat Jan 30 00:40:23 2010 +0000
    
        Remove ARM-specific calling convention from this test.  Target data is
        needed for this test, but otherwise, there's nothing ARM-specific about
        it and no need to specify the calling convention.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94862 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9a758cd460d0f29f9a7dfe3a0c15492f40c7b86b
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Jan 30 00:24:12 2010 +0000
    
        X86.td: Refactor to bring operands that use print_pcrel_imm together.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94861 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 23f4491b48f762af658aa54cd0a1edaee9339ea9
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Jan 30 00:24:06 2010 +0000
    
        FileCheck: When looking for "possible matches", only compare against the prefix
        line. Turns out edit_distance can be slow if the string we are scanning for
        happens to be quite large.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94860 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f70705634a6eb03429eddab5e4cb6e43dfaa3fde
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Sat Jan 30 00:24:00 2010 +0000
    
        AsmMatcher/X86: Separate out sublass for memory operands that have no segment
        register, and use to cleanup a FIXME in X86AsmParser.cpp.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94859 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 943e7d9fa17371889cab42815cd698d942d81b73
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Fri Jan 29 23:54:14 2010 +0000
    
        Keep iterating over all uses when meeting a phi node in AllUsesOfValueWillTrapIfNull().
    
        This bug was exposed by my inliner cost changes in r94615, and caused failures
        of lencod on most architectures when building with LTO.
    
        This patch fixes lencod and 464.h264ref on x86-64 (and likely others).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94858 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2b50cf92f36da35ca33074783b5ecc1a556657c0
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Jan 29 23:32:40 2010 +0000
    
        MC/X86: Add a nice X86 assembler matcher test case from Kevin Enderby.
         - This test case is auto generated, and has been verified to round-trip
           correctly through llvm-mc by checking the assembled .o file before and after
           piping through llvm-mc. It will be extended over time as the matcher grows
           support for more instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94857 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 92411a8e37a8f1d29aec9d0531ff4ec2895f875f
    Author: Johnny Chen <johnny.chen at apple.com>
    Date:   Fri Jan 29 23:21:10 2010 +0000
    
        Modified encoding bits specification for VFP instructions.  In particular, the D
        bit (Inst{22}) and the M bit (Inst{5}) should be left unspecified.  For binary
        format instructions, Inst{6} and Inst{4} need to be specified for proper decoding.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94855 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e6069dd121294d238437705678bfef24e7cfd012
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Jan 29 23:12:36 2010 +0000
    
        Print a comment next to "materializable" global values, to distinguish
        them from values that are not actually defined in the module.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94854 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e7b6d2c6a6443d73a3f3f6d31faf4d57ec10655b
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Jan 29 23:05:56 2010 +0000
    
        PPC is not ready for sibcall optimization.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94853 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9aa401e14dca8c65d2fb4c82c43b1d9cbe8f1fbd
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Jan 29 22:39:21 2010 +0000
    
        Preserve load alignment in instcombine transformations.  I've been unable to
        create a testcase where this matters.  The select+load transformation only
        occurs when isSafeToLoadUnconditionally is true, and in those situations,
        instcombine also changes the underlying objects to be aligned.  This seems
        like a good idea regardless, and I've verified that it doesn't pessimize
        the subsequent realignment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94850 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ccfec8175249ecc3b926755db90e916cfd806583
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Jan 29 21:57:46 2010 +0000
    
        Minor code cleanup.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94848 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 833de98561fcf7f668d1a176eeee80f0c871b358
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Jan 29 21:55:16 2010 +0000
    
        Skip whitespace when looking for a potential intended match.
        Before:
    
        <stdin>:94:1: note: possible intended match here
         movsd 4096(%rsi), %xmm0
        ^
    
        After:
        <stdin>:94:2: note: possible intended match here
         movsd 4096(%rsi), %xmm0
         ^
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94847 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7786f1bdb609cac7e31b20efbcd2050674f0eee6
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Jan 29 21:53:18 2010 +0000
    
        Fix the position of the caret in the FileCheck error message.
        Before:
    
        test/CodeGen/X86/lsr-reuse.ll:52:34: error: expected string not found in input
        ; CHECK: movsd -2048(%rsi), %xmm0
                                         ^
    
        After:
    
        test/CodeGen/X86/lsr-reuse.ll:52:10: error: expected string not found in input
        ; CHECK: movsd -2048(%rsi), %xmm0
                 ^
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94846 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2e27257da571ae4897b9367dedb358da00d6c9b9
    Author: Junjie Gu <jgu222 at gmail.com>
    Date:   Fri Jan 29 21:34:26 2010 +0000
    
        Make sure the size is doubled (not 4x).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94845 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 385d68b0ed46d8896662325493a245d1ec8c0dce
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Fri Jan 29 21:21:44 2010 +0000
    
        Removed symbols from .exports that are not yet in
        the library.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94844 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 08ad58149cf43b916c44c4710c7fa27154285cca
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Fri Jan 29 21:21:28 2010 +0000
    
        Add assertion to humor the paranoid.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94843 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cb265567b7b4d8db906ca74b73235a4f5bdec1c6
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Fri Jan 29 21:19:19 2010 +0000
    
        We were not writing bitcode for function-local metadata whose operands had been erased (leaving it with no remaining function-local operands)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94842 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e0521734588adfaec93ccf7dffba127b014c3658
    Author: Eric Christopher <echristo at apple.com>
    Date:   Fri Jan 29 21:16:24 2010 +0000
    
        Revert my last couple of patches. They appear to have broken bison.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94841 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2e0df3849e3ab8909485c1a17fa9c5cdea6269da
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Fri Jan 29 21:11:04 2010 +0000
    
        Rename two IRReader.h functions to indicate that they return a Module
        that loads its contents lazily from bitcode.  I think these are the
        only remaining mis-named functions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94840 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 30f1709250c8f855ac512cac5f0d806097957d60
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Jan 29 20:34:28 2010 +0000
    
        Use uint64_t instead of unsigned for offsets and sizes.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94835 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2fea743797a4779b62b14d78a6537d59247e1e38
    Author: Dan Gohman <gohman at apple.com>
    Date:   Fri Jan 29 19:43:48 2010 +0000
    
        Add svn:ignore properties.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94833 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 5575c54de367c1991dde5e53127c3adf0fb75def
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Fri Jan 29 19:19:08 2010 +0000
    
        Improve isSafeToLoadUnconditionally to recognize that GEPs with constant
        indices are safe if the result is known to be within the bounds of the
        underlying object.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94829 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e472109fe15b02c2f6b264df4546367e9320f10f
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Fri Jan 29 19:10:38 2010 +0000
    
        Belatedly document r85295 and r85330.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94825 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d839e1a9792eff26a79c8baef8e1b34ab61b24bb
    Author: Devang Patel <dpatel at apple.com>
    Date:   Fri Jan 29 18:34:58 2010 +0000
    
        Add size and location info in DW_TAG_class_type descriptor.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94822 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1126e04037ca1207fa56151f6c9c06fda8cfe1cc
    Author: Devang Patel <dpatel at apple.com>
    Date:   Fri Jan 29 18:30:57 2010 +0000
    
        Before inserting an llvm.dbg.declare intrinsic at the end of a basic block, check whether the basic block has a terminator or not.
        This API is used by clang and the test case is test/CodeGen/debug-info-crash.c in the clang module.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94820 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 289c0564c911670f5c4e1826420e3fd820f582c0
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Fri Jan 29 15:19:06 2010 +0000
    
        Fix MSVC build.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94809 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2b58d509ef83ef2d0c14f0d206fd39354a2ddc62
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Fri Jan 29 14:42:22 2010 +0000
    
        Convert some users of ftostr to raw_ostream.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94808 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1b3c938a44e57ec75cbea9c4cf003a0016cfbd56
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Fri Jan 29 14:40:33 2010 +0000
    
        Use llvm::format instead of ftostr (which just calls sprintf).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94807 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 273c038fca6dd89a242c42978f922d69770bd407
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Fri Jan 29 09:45:26 2010 +0000
    
        Change the SREM case to match the logic in the IR version ComputeMaskedBits.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94805 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4aa47bd2b13b887593399ebb4a8c18555e08ca13
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Fri Jan 29 06:45:59 2010 +0000
    
        Catch more trivial tail call opportunities: no inputs and output types match.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94804 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a1deef01c49f252e0a4145459ec10c535637130e
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Fri Jan 29 06:18:46 2010 +0000
    
        Having RHSKnownZero and RHSKnownOne be alternative names for KnownZero and KnownOne
        (via APInt &RHSKnownZero = KnownZero, etc) seems dangerous and confusing to me: it
        is easy not to notice this, and then wonder why KnownZero/RHSKnownZero changed
        underneath you when you modified RHSKnownZero/KnownZero etc.  So get rid of this.
        No intended functionality change (tested with "make check" + llvm-gcc bootstrap).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94802 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c852af1cafe657e4897e153d3a2bc10505b990ef
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Fri Jan 29 06:18:37 2010 +0000
    
        It looks like the changes to the SRem logic of SimplifyDemandedUseBits
        (fix for PR6165) are needed here too.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94801 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8c57220a2ef346d8d4cbc40d5533d3c9f094424c
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Fri Jan 29 03:22:19 2010 +0000
    
        FileCheck: Switch "possible match" calculation to use StringRef::edit_distance.
         - Thanks Doug, who is obviously less lazy than me!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94795 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8413ab7835e70bbd458a93be8239f1011e073031
    Author: Eric Christopher <echristo at apple.com>
    Date:   Fri Jan 29 01:37:11 2010 +0000
    
        Make strcpy_chk lower to strcpy if we have a safe size.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94783 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c94c6dfca03fbe5ba2173cd79fb4e7d902d7ba90
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Fri Jan 29 01:34:29 2010 +0000
    
        Quick fix to make the header file for the enhanced
        disassembly information have a better comment (and
        better guard macros).
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94781 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 442bd4e292c9bbfe2f34b0fc53f9e34d7fde07be
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Fri Jan 29 01:30:01 2010 +0000
    
        Added a bare-bones Makefile to build the enhanced disassembly
        library as a static and a shared library.  Added dependencies
        so the target-specific enhanced disassembly info tables are
        built before the library.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94780 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 133eb9c346127f28c89966a9671de0ca96f629fe
    Author: Ted Kremenek <kremenek at apple.com>
    Date:   Fri Jan 29 01:10:55 2010 +0000
    
        Recognize 'add_executable' when analyzing CMake files.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94777 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1aa2f8adbb477ffa51fb521676a2fd1293ba8143
    Author: Ted Kremenek <kremenek at apple.com>
    Date:   Fri Jan 29 01:10:25 2010 +0000
    
        Update CMake build.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94776 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 77fe12bf626a284d889283033322e1d1d3877454
    Author: Eric Christopher <echristo at apple.com>
    Date:   Fri Jan 29 01:09:57 2010 +0000
    
        Add constant support to object size handling and remove default
        lowering. We'll either figure it out, or not and be lowered by
        SelectionDAGBuild.
    
        Add test.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94775 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f495802f3e81462b29e23c5e83902d3dca5f0fea
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Jan 29 00:52:43 2010 +0000
    
        Generic reformatting and comment fixing. No functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94771 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 93bff173ea147f2b516677828f1f79edb171c29e
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Fri Jan 29 00:27:39 2010 +0000
    
        Add newline to debugging output, and fix some grammar-os in comment.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94765 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 323f7068c889952ac6a2700ef08c1af6ba3f9bc3
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Fri Jan 29 00:21:04 2010 +0000
    
        Added a custom TableGen backend to support the
        enhanced disassembler, and the necessary makefile
        rules to build the table for X86.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94764 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 48539b5d4d4e6e17cf58179b2d2d6fc6e1f7c1d9
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Fri Jan 29 00:01:35 2010 +0000
    
        mem2reg erases the dbg.declare intrinsics that it converts to dbg.val intrinsics
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94763 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 68d271623713a16ffe62a60edf32d7e3b6c2f275
    Author: Bill Wendling <isanbard at gmail.com>
    Date:   Thu Jan 28 21:51:40 2010 +0000
    
        Assign the ordering of SDNodes in a much less intrusive fashion. After the
        "visit*" method is called, take the newly created nodes, walk them in a DFS
        fashion, and if they don't have an ordering set, then give it one.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94757 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1a12c56badd0733754baee7f7921b495509f9e53
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Thu Jan 28 18:19:36 2010 +0000
    
        Support some more options...
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94752 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a64c74890ef37c607b3f1742abb5b1af65342afa
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Jan 28 18:08:26 2010 +0000
    
        Remove the folding rule
          getelementptr (i8* inttoptr (i64 1 to i8*), i32 -1)
          to
          inttoptr (i64 0 to i8*)
        from the VMCore constant folder. It didn't handle sign-extension properly
        in the case where the source integer is smaller than a pointer size. And,
        it relied on an assumption about sizeof(i8).
    
        The Analysis constant folder still folds these kinds of things; it has
        access to TargetData, so it can do them right.
    
        Add a testcase which tests that the VMCore constant folder doesn't
        miscompile this, and that the Analysis folder does fold it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94750 91177308-0d34-0410-b5e6-96231b3b80d8
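
        [Editor's note: the sign-extension issue described above can be
        sketched numerically: a 32-bit index such as i32 -1 must be
        sign-extended, not zero-extended, to the 64-bit pointer width before
        the byte offset is applied.  A hypothetical model (names mine, not
        LLVM API):]

```python
def gep_i8(base: int, idx32: int, ptr_bits: int = 64) -> int:
    """Model getelementptr on an i8* with a 32-bit index: sign-extend, then add."""
    # Sign-extend the 32-bit index to the pointer width before the addition.
    idx = idx32 - (1 << 32) if idx32 & 0x80000000 else idx32
    return (base + idx) % (1 << ptr_bits)

# i32 -1 is 0xFFFFFFFF in raw bits; folding
# getelementptr(inttoptr(i64 1 to i8*), i32 -1) correctly yields address 0 ...
assert gep_i8(1, 0xFFFFFFFF) == 0
# ... while naive zero-extension of the index would give 0x100000000 instead.
assert 1 + 0xFFFFFFFF == 0x100000000
```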
    
    commit 078a4063e384d3a21a14bc1af4c88aeacd4a1f72
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Thu Jan 28 18:04:38 2010 +0000
    
        Replace strcpy with memcpy when we have the length around anyway.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94746 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 41d0a0c1b36033c45a5fa74ec353842b2244b22a
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Thu Jan 28 17:22:42 2010 +0000
    
        Fix PR6165.  The bug was that LHSKnownZero was being and'd with DemandedMask
        when it should have been and'd with LowBits.  Fix that and while there beef
        up the logic in the case of a negative LHS.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94745 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fe53061bea27c4b222c548cf2b0e4428134e8bf0
    Author: Douglas Gregor <doug.gregor at gmail.com>
    Date:   Thu Jan 28 06:42:08 2010 +0000
    
        Add llvm::Program::ChangeStderrToBinary().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94743 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 08b017ea9bd8cc8471b6dbd151402b73606f9057
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Jan 28 06:32:46 2010 +0000
    
        Check Type::isSized before calling ScalarEvolution::getAllocSizeExpr,
        rather than after.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94742 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 89c402f35edafd7c2178d4ee99644669abd476a8
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Jan 28 06:22:43 2010 +0000
    
        convert the last 3 targets to use EmitFunctionBody() now that
        it has before/end body hooks.
    
         lib/Target/Alpha/AsmPrinter/AlphaAsmPrinter.cpp |   49 ++-----------
         lib/Target/Mips/AsmPrinter/MipsAsmPrinter.cpp   |   87 ++++++------------------
         lib/Target/XCore/AsmPrinter/XCoreAsmPrinter.cpp |   56 +++------------
         test/CodeGen/XCore/ashr.ll                      |    2
         4 files changed, 48 insertions(+), 146 deletions(-)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94741 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ca73d86620334381948d7d3d713e1b31701dec86
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Jan 28 02:43:22 2010 +0000
    
        Make getAlignOf return an i64, for consistency with getSizeOf and
        getOffsetOf, and remove the comment about assuming i8 is byte-aligned,
        which is no longer applicable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94738 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d9ebe37839d32ce786dd7140bd559df444ac772d
    Author: Dan Gohman <gohman at apple.com>
    Date:   Thu Jan 28 02:15:55 2010 +0000
    
        Remove SCEVAllocSizeExpr and SCEVFieldOffsetExpr, and in their place
        use plain SCEVUnknowns with ConstantExpr::getSizeOf and
        ConstantExpr::getOffsetOf constants. This eliminates a bunch of
        special-case code.
    
        Also add code for pattern-matching these expressions, for clients that
        want to recognize them.
    
        Move ScalarEvolution's logic for expanding array and vector sizeof
        expressions into an element count times the element size, to expose
        the multiplication to subsequent folding, into the regular constant
        folder.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94737 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 117728fed8d10c94cf56f9cb5aa2d6e93f5143d1
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Jan 28 01:58:58 2010 +0000
    
        add target hooks for emitting random gunk before and after the function body.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94732 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0e64765c09c0f900518c9db8d1d116a1a7180e9d
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Thu Jan 28 01:57:22 2010 +0000
    
        Fix a bug introduced by r94490 where it created a X86ISD::CMP whose output type is different from its inputs.
        This fixes PR6146.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94731 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bdbb81d0628e15b5a7ea2ae1c466bb283ab740ef
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Jan 28 01:54:33 2010 +0000
    
        switch blackfin to the default runOnMachineFunction
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94729 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b825faaddebc861ee06af5a50cef78e645275673
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Jan 28 01:50:22 2010 +0000
    
        eliminate a now-useless class.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94728 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d9df5d79183e13a0ce6915c184f47d7c01e72dc9
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Jan 28 01:48:52 2010 +0000
    
        Switch MSP430, SPU, Sparc, and SystemZ to use EmitFunctionBody().
    
        Diffstat:
         6 files changed, 30 insertions(+), 284 deletions(-)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94727 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f59d57d36935e921c12785690477d3bfdba9d69d
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Thu Jan 28 01:45:32 2010 +0000
    
        Update of 94055 to track the IR level call site information via an intrinsic.
        This allows code gen and the exception table writer to cooperate to make sure
        landing pads are associated with the correct invoke locations.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94726 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bf7dadf717673b9d494f9930f6618236feb2ce7f
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Thu Jan 28 01:41:20 2010 +0000
    
        Record the death of ModuleProvider and GhostLinkage in the release notes and
        give upgrade instructions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94723 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 57d849fe78589bb84ec82db82cdbcb6dc88207b4
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Jan 28 01:28:58 2010 +0000
    
        Give AsmPrinter the most common expected implementation of
        runOnMachineFunction, and switch PPC to use EmitFunctionBody.
        The two ppc asmprinters now don't have to define
        runOnMachineFunction.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94722 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4e47dee4ccf9640938dd81cb93a9fd1ab26f5874
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Thu Jan 28 01:14:43 2010 +0000
    
        Truncate the release notes so they're ready to accumulate notes for the 2.7 release.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94720 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit eb0602789a69e4764c70321424eb552a5d33d9cc
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Jan 28 01:10:34 2010 +0000
    
        switch ARM to EmitFunctionBody().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94719 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 07a24263e9daeb0d702fb8890e17347733327732
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Jan 28 01:06:32 2010 +0000
    
        emit a 0 byte instead of a noop if a function is empty on darwin.
        "0" is nice and target independent.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94718 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bea3e68096198f2d2b453b7e2f94303f61f587b0
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Jan 28 01:02:27 2010 +0000
    
        Remove the argument from EmitJumpTableInfo, because it doesn't need it.
    
        Move the X86 implementation of function body emission up to
        AsmPrinter::EmitFunctionBody, which works by calling the virtual
        EmitInstruction method.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94716 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 692276d99e08f224f58ed5207ce4e8310137f72e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Jan 28 00:19:24 2010 +0000
    
        Drop the argument to AsmPrinter::EmitConstantPool and make it virtual.
        Overload it in the ARM backend to do nothing, since it does insane
        constant pool emission.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94708 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c1b68d104bed6ca4fb4040a29b1cda52c2343e85
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Jan 28 00:15:18 2010 +0000
    
        don't emit constant pools twice.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94706 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b3c5f49d35d0468f92e6c02242cf6ef9dccbdb84
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Thu Jan 28 00:05:10 2010 +0000
    
        rename printVisibility to EmitVisibility and make it private,
        constify EmitLinkage.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94705 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e12efcf52becf0152d0f58877fb0b6207b81d6ef
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Jan 27 23:58:11 2010 +0000
    
        switch ARM to use EmitFunctionHeader.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94703 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit df6c7e6ba3e6068ffd02540034da342e87a44e6d
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Jan 27 23:37:36 2010 +0000
    
        eliminate the ARMFunctionInfo::Align member, using
        MachineFunction::Alignment instead.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94701 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0f0b5d27b3d7da40809549ea3163cad91b54ad34
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Jan 27 23:35:43 2010 +0000
    
        add a helper function for bumping up the alignment of a machine function.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94700 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a9ae1a57883cc746c8ed9eedef5e8b5bb33a5c30
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Jan 27 23:26:37 2010 +0000
    
        switch blackfin to use EmitFunctionHeader.  BlackfinAsmPrinter.cpp
        is now less than 200 LOC!
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94699 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9d5f9ea90027b7f170feda84b1d475ba9b861a15
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Jan 27 23:23:58 2010 +0000
    
        switch mips to use the shared EmitFunctionHeader() function
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94698 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9926d38af912f56ed3d6e24d76ed9f73a0887395
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Wed Jan 27 23:20:51 2010 +0000
    
        Changed constants to an enum so as not to pollute the
        global namespace needlessly.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94697 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e6060faac72ff45287e952a53f7a58a4ac36cf9d
    Author: Sean Callanan <scallanan at apple.com>
    Date:   Wed Jan 27 23:03:46 2010 +0000
    
        Added a header file defining the externally-visible C API
        for the LLVM disassemblers.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94696 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dfa3d94ab52cb5e551f740deaaaa1e5d2a7f8ad4
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Jan 27 22:12:36 2010 +0000
    
        If the only use of something is a DEBUG_VALUE, don't
        let that stop it from being deleted, and change the
        DEBUG_VALUE value to undef.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94694 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 02e6dd46f178a58440557dbfa41df4cc8ed88203
    Author: Dale Johannesen <dalej at apple.com>
    Date:   Wed Jan 27 22:11:16 2010 +0000
    
        Treat MO_REG 0 location as undefined in DEBUG_VALUE,
        per the documentation.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94693 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9dbdbe8bee1358d3435be1898374c48263fb00e5
    Author: Dan Gohman <gohman at apple.com>
    Date:   Wed Jan 27 22:06:46 2010 +0000
    
        Add an svn:ignore.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94692 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit f0c5a8634d4dade44fec9a1abc76b20c5f1a8fca
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Wed Jan 27 22:03:03 2010 +0000
    
        Need to recurse for all operands of function-local metadata; and handle Instructions (which map to themselves)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94691 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 662a0c306457aed9147f582e64ed3642d4aa7a8b
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Wed Jan 27 22:01:02 2010 +0000
    
        Avoid creating redundant PHIs in SSAUpdater::GetValueInMiddleOfBlock.
        This was already being done in SSAUpdater::GetValueAtEndOfBlock so I've
        just changed SSAUpdater to check for existing PHIs in both places.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94690 91177308-0d34-0410-b5e6-96231b3b80d8
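The redundancy check this commit adds to both SSAUpdater paths can be sketched with a toy data model (a list of predecessor-to-value dicts; this is not SSAUpdater's real representation): before materializing a new PHI node, scan the block's existing PHIs for one with the same incoming values and reuse it.

```python
def get_or_create_phi(block_phis, incoming):
    """Reuse an existing PHI whose (predecessor -> value) map matches
    `incoming`, instead of creating a duplicate.

    `block_phis` is the block's list of PHI dicts; a new dict is only
    appended when no equivalent PHI already exists.
    """
    for phi in block_phis:
        if phi == incoming:
            return phi           # reuse the existing, equivalent PHI
    block_phis.append(incoming)  # otherwise materialize a new one
    return incoming

phis = [{"pred_a": "v1", "pred_b": "v2"}]
same = get_or_create_phi(phis, {"pred_a": "v1", "pred_b": "v2"})
assert same is phis[0] and len(phis) == 1      # no redundant PHI created
new = get_or_create_phi(phis, {"pred_a": "v1", "pred_b": "v3"})
assert len(phis) == 2                          # genuinely new PHI added
```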
    
    commit dbf472700fa3dbfb9568126b52d28ada35b13ece
    Author: Ted Kremenek <kremenek at apple.com>
    Date:   Wed Jan 27 20:44:12 2010 +0000
    
        Update CMake build.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94687 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 752ff1fb26fc819498de04d342d496f146100f2a
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Wed Jan 27 20:34:15 2010 +0000
    
        Kill ModuleProvider and ghost linkage by inverting the relationship between
        Modules and ModuleProviders. Because the "ModuleProvider" simply materializes
        GlobalValues now, and doesn't provide modules, it's renamed to
        "GVMaterializer". Code that used to need a ModuleProvider to materialize
        Functions can now materialize the Functions directly. Functions no longer use a
        magic linkage to record that they're materializable; they simply ask the
        GVMaterializer.
    
        Because the C ABI must never change, we can't remove LLVMModuleProviderRef or
        the functions that refer to it. Instead, because Module now exposes the same
        functionality ModuleProvider used to, we store a Module* in any
        LLVMModuleProviderRef and translate in the wrapper methods.  The bindings to
        other languages still use the ModuleProvider concept.  It would probably be
        worth some time to update them to follow the C++ more closely, but I don't
        intend to do it.
    
        Fixes http://llvm.org/PR5737 and http://llvm.org/PR5735.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94686 91177308-0d34-0410-b5e6-96231b3b80d8
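The inverted relationship described above can be sketched with a toy interface (class and method names are hypothetical, not LLVM's C++ API): the Module owns a GVMaterializer and asks it to flesh out function bodies on demand, rather than a ModuleProvider owning and handing out the Module.

```python
class GVMaterializer:
    """Supplies global-value bodies on request; knows nothing about
    which Module consumes them."""
    def __init__(self, bodies):
        self._bodies = bodies

    def materialize(self, name):
        return self._bodies[name]

class Module:
    """Owns its materializer and lazily fills in functions as callers
    ask for them."""
    def __init__(self, materializer):
        self._mat = materializer
        self._funcs = {}

    def get_function(self, name):
        if name not in self._funcs:           # lazily materialize once
            self._funcs[name] = self._mat.materialize(name)
        return self._funcs[name]

m = Module(GVMaterializer({"f": "body-of-f"}))
assert m.get_function("f") == "body-of-f"
```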
    
    commit 844446082c4a29f4c23e18ff5225890a3606a4c6
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Wed Jan 27 19:58:47 2010 +0000
    
        Don't bother with sprintf, just pass the Twine through.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94684 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d238cc10f8fe9b9e568c76ed3943a48a0b2985a0
    Author: Benjamin Kramer <benny.kra at googlemail.com>
    Date:   Wed Jan 27 19:46:52 2010 +0000
    
        Use the less expensive getName function instead of getNameStr.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94683 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit bed204de2b223a441b1c4582ff813e5b664052c6
    Author: Chandler Carruth <chandlerc at gmail.com>
    Date:   Wed Jan 27 10:36:15 2010 +0000
    
        Quick fix to a test that is currently failing on every Linux build bot. No idea
        if this is the "correct" fix, but it seems a strict improvement.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94675 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c93a316297a3545cbd6a8924a34e4370fe6d1905
    Author: Chandler Carruth <chandlerc at gmail.com>
    Date:   Wed Jan 27 10:27:10 2010 +0000
    
        Silence GCC warnings with asserts turned off. No functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94673 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fbbb0b1faaf20465d85b540684eb90a7fe1d0178
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Wed Jan 27 10:13:28 2010 +0000
    
        Make SMDiagnostic::Print a const method.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94672 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 0b79bae89b28eb9a4ab4dc4ec6c62985e69a10d4
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Wed Jan 27 10:13:11 2010 +0000
    
        Trailing whitespace.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94671 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 47bc2f65663192c4d8ac781b685d94d7144f5cea
    Author: Duncan Sands <baldrick at free.fr>
    Date:   Wed Jan 27 10:08:08 2010 +0000
    
        Revert commit 94666 (ddunbar) [Suppress clang warning about unused arguments].
        It causes g++ to complain: unrecognized option '-Qunused-arguments'
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94670 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 118ed0eb33493ab604b6b24ad89e6348fd68195e
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Jan 27 07:21:55 2010 +0000
    
        add a new AsmPrinter::EmitFunctionEntryLabel virtual function,
        which allows targets to override function entry label emission.
        Use it to convert linux/ppc to use EmitFunctionHeader().
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94667 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cdd25445c0cdfe9def326ff61b6dbe6ddac69b48
    Author: Daniel Dunbar <daniel at zuster.org>
    Date:   Wed Jan 27 07:10:10 2010 +0000
    
        Suppress clang warning about unused arguments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94666 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit aac8d95fefc1f8efef19e0ca1e1d7f6f2575d6b8
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Jan 27 06:25:16 2010 +0000
    
        Perform trivial tail call optimization for callees with the "C" ABI. These are
        done even when -tailcallopt is not specified, and they do not require changing
        the ABI.
        First case is the most trivial one. Perform tail call optimization when both
        the caller and callee do not return values and when the callee does not take
        any input arguments.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94664 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e453d94b9cedea849c6aec9fbfcbe395cadbde8f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Jan 27 02:18:21 2010 +0000
    
        merge two ifs
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94650 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 78b48b61971bdb388971e9dd7473cc4661052159
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Jan 27 02:12:20 2010 +0000
    
        some cleanups.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94649 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e347d24803b32d9b25ad30b1f604b63cc7606eb5
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Jan 27 02:04:20 2010 +0000
    
        no need to check for null
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94648 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fe398973c7ced1a4af9a925bfeecbb6527a096d5
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Jan 27 01:44:40 2010 +0000
    
        Remove a dead target hook.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94646 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7bf770db031ef3e9348cc5db7d836d545ac0b1b7
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Jan 27 01:02:43 2010 +0000
    
        ppc/linux isn't ready for this and it was an accident that it was included.
        This should fix a bunch of linux buildbot failures.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94643 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 12dbb7096ea5f08cdce6099a28539ae9f217870b
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Wed Jan 27 00:44:36 2010 +0000
    
        When converting dbg.declare to dbg.value, attach promoted store's debug metadata to dbg.value
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94634 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 25a2a85e32636a3c09169c9c25de7509404dac4c
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Wed Jan 27 00:30:42 2010 +0000
    
        Linker needs to do deep-copy of function-local metadata to update references to function arguments
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94632 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 89cff97315984f1b30b2d2ca1470d684410d878f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Jan 27 00:20:02 2010 +0000
    
        use existing basic block numbers instead of recomputing
        a new set of them.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94631 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e0e44f66356645c49f1a3e9a6807f6828e9de5e7
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Wed Jan 27 00:17:20 2010 +0000
    
        Switch MSP430, CellSPU, SystemZ, Darwin/PPC, Alpha, and Sparc to
        EmitFunctionHeader:
    
        7 files changed, 16 insertions(+), 210 deletions(-)
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94630 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6c55bfe437197d0d34edb6887da2a2dd29eb6abe
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Jan 27 00:10:09 2010 +0000
    
        Clarify what the -tailcallopt option actually does.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94628 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8f240d49cb730fa95b7752aec624f14e26977443
    Author: Jim Grosbach <grosbach at apple.com>
    Date:   Wed Jan 27 00:07:20 2010 +0000
    
        Adjust setjmp instruction sequence to not need 32-bit alignment padding
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94627 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 8904bc5f834bde215cbaba8a898f739db1dfc673
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Jan 27 00:07:07 2010 +0000
    
        Eliminate target hook IsEligibleForTailCallOptimization.
    
        Target independent isel should always pass along the "tail call" property. Change
        target hook LowerCall's parameter "isTailCall" into a reference. If the target
        decides it's impossible to honor the tail call request, it should set isTailCall
        to false to make target independent isel happy.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94626 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a0045f69c5b838f81cc0bf3a1c5512fe6b204988
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Wed Jan 27 00:00:57 2010 +0000
    
        Restore to pre-94570 state.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94625 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b799bdfb839df00dd357b0f63ddce9086e2183ed
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Jan 26 23:53:39 2010 +0000
    
        mcize label emission for functions.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94624 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a8174307f67b59afd8ba5355d05f0c8262554b31
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Jan 26 23:51:52 2010 +0000
    
        use EmitLinkage for functions as well as globals.  One output
        change is that we now use ".linkonce discard" for global variables
        instead of ".linkonce samesize".  These should be the same, just less
        strict.  If anyone is interested in mcizing MCSection for COFF targets,
        this should be easy to fix.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94623 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 31ffe0af9fac830a223c3a577ecac5bbaaecceae
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Jan 26 23:47:12 2010 +0000
    
        pull linkage emission code out to a new EmitLinkage function.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94621 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit d1e30988695b7a16e8e116395c951068a8ddbe0f
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Jan 26 23:41:48 2010 +0000
    
        rearrange some directives, no functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94620 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7bca23ef9d02ce81b17c2964dd4411f71bb66481
    Author: Jeffrey Yasskin <jyasskin at google.com>
    Date:   Tue Jan 26 23:30:46 2010 +0000
    
        Roll r94484 (avoiding RTTI problems in tests) forward again in a way that isn't
        broken by setting CXXFLAGS on the command line.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94619 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 2548e91a283df66d482004ed3e973d2c381c5af7
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Tue Jan 26 23:29:09 2010 +0000
    
        Avoid extra calls to MD->getNumOperands()
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94618 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 045d60450f7220d2f1446057058bd72bd6ed323d
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Jan 26 23:28:40 2010 +0000
    
        Ignore 'forced' tailcall opt in fastisel mode.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94617 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7c329dc12d3a5172ce163bf6e4cc84969f9c2e58
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Jan 26 23:26:29 2010 +0000
    
        remove a noop function.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94616 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fcd69992ced3281fc9d0630806d30be2feaab5a2
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Tue Jan 26 23:21:56 2010 +0000
    
        Fix inline cost predictions with SCIENCE.
    
        After running a batch of measurements, it is clear that the inliner metrics
        need some adjustments:
    
        Own argument bonus:       20 -> 5
        Outgoing argument penalty: 0 -> 5
        Alloca bonus:             10 -> 5
        Constant instr bonus:      7 -> 5
        Dead successor bonus:     40 -> 5*(avg instrs/block)
    
        The new cost metrics are generally 25 points higher than before, so we may need
        to move thresholds.
    
        With this change, InlineConstants::CallPenalty becomes a political correction:
    
        if (!isa<IntrinsicInst>(II) && !callIsSmall(CS.getCalledFunction()))
          NumInsts += InlineConstants::CallPenalty + CS.arg_size();
    
        The code size is accurately modelled by CS.arg_size(). CallPenalty is added
        because calls tend to take a long time, so it may not be worth it to inline a
        function with lots of calls.
    
        All of the political corrections are in the InlineConstants namespace:
        IndirectCallBonus, CallPenalty, LastCallToStaticBonus, ColdccPenalty,
        NoreturnPenalty.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94615 91177308-0d34-0410-b5e6-96231b3b80d8
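The quoted accounting line can be mirrored in a few lines of Python. The penalty value below is hypothetical for illustration; the real constants live in the `llvm::InlineConstants` namespace named above.

```python
CALL_PENALTY = 25  # hypothetical stand-in for InlineConstants::CallPenalty

def call_cost(num_args, is_intrinsic=False, call_is_small=False):
    """Model of the quoted rule: a non-intrinsic, non-small call costs
    CallPenalty plus one unit per argument (CS.arg_size()); intrinsics
    and known-small calls cost nothing extra."""
    if is_intrinsic or call_is_small:
        return 0
    return CALL_PENALTY + num_args

assert call_cost(3) == 28                      # CallPenalty + arg_size
assert call_cost(3, is_intrinsic=True) == 0    # intrinsics are exempt
```

The argument count models code size, while the flat penalty models the runtime cost of making a call at all, which is why both terms appear.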
    
    commit 6af8d0c03c0957ea52590ac15a7deb2dbb5b3bab
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Jan 26 23:18:44 2010 +0000
    
        now that enough stuff is constified, move function header printing
        logic up from X86 into the common code.  The other targets will
        hopefully start using this soon.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94614 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 6139e33352f91d8deceac847e532bd56d69ee8d3
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Jan 26 23:18:02 2010 +0000
    
        constify a bunch of dwarf stuff now that the registerinfo method
        is constified.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94613 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 305c54bb02ce0ce0360b20d8562e299bc69dbd60
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Jan 26 23:15:09 2010 +0000
    
        constify a method argument.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94612 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 1082b75a14004f924335a44a7e6aae4d8d5957f7
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Jan 26 23:13:04 2010 +0000
    
        Allow some automatic tailcall optimization without changing ABI.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94611 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit fd80f9ce1de542f63d4676bea34825b55da1ce64
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Jan 26 23:07:57 2010 +0000
    
        Delete blank lines that bug me.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94610 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 36f987f0a6c1b88f47baec98947da0fa28c0fe34
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Jan 26 22:06:58 2010 +0000
    
        call emitconstantpool and emitjumptable like other targets.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94601 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 514f9f89c41d672737168b62f36f5835dc656161
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Jan 26 22:03:41 2010 +0000
    
        Before inserting a NamedMDNode entry into the symbol table, remove any existing entry with the same name.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94600 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ce1f25476c67b572f4d82899a9ed693cf1115f9b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Jan 26 21:53:08 2010 +0000
    
        emit jump table and alias ".set" directives through MCStreamer as
        assignments.
    
        .set x, a-b
    
        is the same as:
    
        x = a-b
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94596 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 268ec7835e66eda747bbda73b54366b3dde2665c
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Jan 26 21:51:43 2010 +0000
    
        fix CastInst::castIsValid to reject aggregate types, fixing PR6153:
    
        llvm-as: t.ll:1:25: error: invalid cast opcode for cast from '[4 x i8]' to '[1 x i32]'
        @x = constant [1 x i32] bitcast ([4 x i8] c"abcd" to [1 x i32])
                                ^
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94595 91177308-0d34-0410-b5e6-96231b3b80d8
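The validity rule behind PR6153 can be sketched as a predicate (a simplified model, not `CastInst::castIsValid` itself): a bitcast requires first-class, non-aggregate types of identical bit width, so arrays and structs are rejected even when their sizes match.

```python
def bitcast_is_valid(src_is_aggregate, dst_is_aggregate,
                     src_bits, dst_bits):
    """Simplified check: bitcast is only valid between non-aggregate
    types of the same bit width. Aggregates ([N x T], structs) are
    rejected regardless of size."""
    if src_is_aggregate or dst_is_aggregate:
        return False
    return src_bits == dst_bits

# [4 x i8] -> [1 x i32]: the sizes match (32 bits) but both sides are
# aggregates, so the cast is rejected, as in the error message above.
assert not bitcast_is_valid(True, True, 32, 32)
# i32 -> float would be fine: first-class scalars of equal width.
assert bitcast_is_valid(False, False, 32, 32)
```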
    
    commit eec5843c809c182a12560f67c0d54aa86ef9e97b
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Jan 26 21:42:58 2010 +0000
    
        Remove unnecessary include.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94594 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit cc04b87311d24f0b8b2d12a75fe442c06302aeb1
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Jan 26 21:39:14 2010 +0000
    
        Use AssertingVH, just to be paranoid.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94593 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 68c3e203d9af2c4e1694235b02dd61fc2b5eb6cc
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Tue Jan 26 21:31:35 2010 +0000
    
        Revert test polarity to match comment and desired outcome. Remove undeserved bonus.
    
        A GEP with all constant indices is already considered free by
        analyzeBasicBlock(), so don't give it an extra bonus in
        CountCodeReductionForAlloca().
    
        This patch should remove a small positive bias toward inlining functions with
        variable-index GEPs, and remove a smaller negative bias from functions with
        all-constant index GEPs.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94591 91177308-0d34-0410-b5e6-96231b3b80d8
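
The heuristic being adjusted: a GEP whose indices are all constants is already counted as free by the basic-block scan, so awarding it a second bonus in the alloca-specific pass double-counted it. A toy model of that accounting (hypothetical weights, not LLVM's actual numbers):

```python
def basic_block_cost(instructions):
    """Toy inline-cost scan: all-constant-index GEPs are considered free,
    everything else costs one unit. Weights are illustrative only."""
    cost = 0
    for op, indices in instructions:
        if op == "gep" and all(isinstance(i, int) for i in indices):
            continue  # free: folds into addressing, so no extra bonus elsewhere
        cost += 1
    return cost

block = [("gep", [0, 2]), ("load", []), ("gep", ["%i"]), ("add", [])]
print(basic_block_cost(block))  # -> 3: the all-constant GEP is free
```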
    
    commit 822b6122a5b7a6fdf120ac8304381b601cb1f0ee
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Tue Jan 26 21:31:30 2010 +0000
    
        Remove dead code.
    
        Functions containing indirectbr are marked NeverInline by analyzeBasicBlock(),
        so there is no point in giving indirectbr special treatment in
        CountCodeReductionForConstant. It is never called.
    
        No functional change intended.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94590 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b66489dcbe91da300bea69d18f8a086d7d503f5e
    Author: Jakob Stoklund Olesen <stoklund at 2pi.dk>
    Date:   Tue Jan 26 21:31:24 2010 +0000
    
        Skip calculation of ArgumentWeights if it will never be used.
    
        Save a few bytes by allocating the correct size vector.
    
        No functional change intended.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94589 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e6e109bd42e8b8fd138f4daf95c30c5b48e6eafa
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Jan 26 21:16:06 2010 +0000
    
        Emit DW_AT_containing_type attribute for a class if containing type is known.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94587 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c8898d2736985e2a14e9cda816d7e640397cfeb1
    Author: Devang Patel <dpatel at apple.com>
    Date:   Tue Jan 26 21:14:59 2010 +0000
    
        Add extra element to composite type. This new element will be used to record c++ class that holds current class's vtable.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94586 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit c317dc5547141e68c337cfe885ee69b44fcb9a6b
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Jan 26 20:40:54 2010 +0000
    
        Eliminate SetDirective, and replace it with HasSetDirective.
        Default HasSetDirective to true, since most targets have it.
    
        The targets that claim to not have it probably do, or it is
        spelled differently. These include Blackfin, Mips, Alpha, and
        PIC16.  All of these except PIC16 are normal ELF targets, so
        they almost certainly have it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94585 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ca334f0696d794f91a2499b3c32843a2dab2a842
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Jan 26 20:36:21 2010 +0000
    
        Delete dead code.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94583 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7d5ae0ed3cb6222e90157ef381678efc6397134c
    Author: Rafael Espindola <rafael.espindola at gmail.com>
    Date:   Tue Jan 26 20:21:43 2010 +0000
    
        Emit .comm alignment in bytes but .align in powers of 2 for ARM ELF.
    
        Original patch by Sandeep Patel and updated by me.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94582 91177308-0d34-0410-b5e6-96231b3b80d8
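
The unit mismatch this patch addresses: on ARM ELF, `.comm` takes its alignment in bytes, while `.align` takes a power-of-two exponent (`.align 3` means 8-byte alignment). A hedged sketch of the conversion:

```python
def align_directive_operand(alignment_bytes):
    """Convert a byte alignment to the exponent form used by .align on ARM ELF.

    The alignment must be a positive power of two.
    """
    if alignment_bytes <= 0 or alignment_bytes & (alignment_bytes - 1):
        raise ValueError("alignment must be a positive power of two")
    return alignment_bytes.bit_length() - 1

# 8-byte alignment: emit ".comm sym,size,8" but ".align 3".
print(align_directive_operand(8))  # -> 3
```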
    
    commit 1b6a1a0319984ad0b8aa7a53c329e25258725e20
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Jan 26 20:20:43 2010 +0000
    
        eliminate MCAsmInfo::NeedsSet: we now just use .set on any platform
        that has it.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94581 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit e9644e95834edf5b112906b858222eca75621946
    Author: Chris Lattner <sabre at nondot.org>
    Date:   Tue Jan 26 20:17:34 2010 +0000
    
        don't set to the default value.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94580 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ab3cf043ce28d0236360938d71ada4f499d749c5
    Author: Junjie Gu <jgu222 at gmail.com>
    Date:   Tue Jan 26 19:45:17 2010 +0000
    
        test commit.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94578 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3bbeaad4db989fd4f2a2fbaff3861a369b8e25de
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Jan 26 19:25:59 2010 +0000
    
        -disable-output is no longer needed with -analyze.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94574 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 35640b2bfaf054ec644de402407dda59a2638d92
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Jan 26 19:19:05 2010 +0000
    
        Make the unsigned-range code more consistent with the signed-range code,
        and clean up some loose ends.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94572 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 3cd6cea76a29447f5057207d07b486af197ae024
    Author: Evan Cheng <evan.cheng at apple.com>
    Date:   Tue Jan 26 19:04:47 2010 +0000
    
        Code refactoring, no functionality change.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94570 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 7590bb0e7d4de62df0993a8c7573b73d6509808f
    Author: Bob Wilson <bob.wilson at apple.com>
    Date:   Tue Jan 26 19:04:37 2010 +0000
    
        Revert 94484.  Re-disable unittests that need RTTI.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94569 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 897516339c2c8ed92066b170200b6a19a045f81f
    Author: Victor Hernandez <vhernandez at apple.com>
    Date:   Tue Jan 26 18:57:53 2010 +0000
    
        Switch AllocaDbgDeclares to SmallVector and don't leak DIFactory
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94567 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ba05500da36091ffa0c867831d779ca45a868685
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Jan 26 18:32:54 2010 +0000
    
        Fix a typo in a comment that Duncan noticed.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94562 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 4a24bfd614af412601a08f9644de5f9e72eb827b
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Jan 26 18:30:24 2010 +0000
    
        Remove SIL, DIL, and BPL from the GR8_NOREX allocation order also.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94560 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit dfb17221accd7197bc194888ae093f90a9636705
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Jan 26 18:14:22 2010 +0000
    
        SIL, DIL, BPL, and SPL require a REX prefix.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94558 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit a2a8efbb47714fca780e31b4e3634bef578668b6
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Jan 26 16:46:18 2010 +0000
    
        Rename ItCount to BECount, since it holds a backedge-taken count rather
        than an iteration count.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94549 91177308-0d34-0410-b5e6-96231b3b80d8
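
The distinction the rename captures: for a loop whose body executes N times, the backedge (the branch back to the loop header) is taken only N-1 times, so the two counts differ by one. A small illustrative sketch (not LLVM code):

```python
def trip_counts(n):
    """Count loop iterations vs. backedge executions for a simple counted loop."""
    iterations = 0
    backedges = 0
    i = 0
    while i < n:
        iterations += 1
        i += 1
        if i < n:  # taking the branch back to the loop header
            backedges += 1
    return iterations, backedges

# A loop that runs 5 times has a backedge-taken count of 4.
print(trip_counts(5))  # -> (5, 4)
```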
    
    commit 135a294519f851b3b701846581e0cd586e036b55
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Jan 26 16:04:20 2010 +0000
    
        Fix ICmpInst::makeConstantRange to use ConstantRange's API properly
        in the case of empty and full ranges.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94548 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 9c32772f586f832b770de187e6822091ee38bc63
    Author: Dan Gohman <gohman at apple.com>
    Date:   Tue Jan 26 15:56:18 2010 +0000
    
        Fix a typo that several people pointed out. Also, address the case of
        wrapping that Duncan pointed out.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94547 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b690d9e8e1653bdf7a13e082918652f9448a01f1
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Tue Jan 26 14:55:44 2010 +0000
    
        Support -arch.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94546 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit ead5e4cce0b949bfecb15cd2bd3ae56ca195888a
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Tue Jan 26 14:55:30 2010 +0000
    
        Support for -iquote.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94545 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit b746f712542767bc7e4c3444793212ac0e3110f5
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Tue Jan 26 14:55:16 2010 +0000
    
        Better error message.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94544 91177308-0d34-0410-b5e6-96231b3b80d8
    
    commit 276fe43e5323899b7c9eadcc80477af0e2d75d32
    Author: Mikhail Glushenkov <foldr at codedgers.com>
    Date:   Tue Jan 26 14:55:04 2010 +0000
    
        Escape double quotes in 'help'.
    
        git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@94543 91177308-0d34-0410-b5e6-96231b3b80d8

diff --git a/libclamav/c++/llvm/Makefile.rules b/libclamav/c++/llvm/Makefile.rules
index 79a2e01..761cc81 100644
--- a/libclamav/c++/llvm/Makefile.rules
+++ b/libclamav/c++/llvm/Makefile.rules
@@ -1574,6 +1574,11 @@ $(ObjDir)/%GenDisassemblerTables.inc.tmp : %.td $(ObjDir)/.dir
 	$(Echo) "Building $(<F) disassembly tables with tblgen"
 	$(Verb) $(TableGen) -gen-disassembler -o $(call SYSPATH, $@) $<
 
+$(TARGET:%=$(ObjDir)/%GenEDInfo.inc.tmp): \
+$(ObjDir)/%GenEDInfo.inc.tmp : %.td $(ObjDir)/.dir
+	$(Echo) "Building $(<F) enhanced disassembly information with tblgen"
+	$(Verb) $(TableGen) -gen-enhanced-disassembly-info -o $(call SYSPATH, $@) $<
+
 $(TARGET:%=$(ObjDir)/%GenFastISel.inc.tmp): \
 $(ObjDir)/%GenFastISel.inc.tmp : %.td $(ObjDir)/.dir
 	$(Echo) "Building $(<F) \"fast\" instruction selector implementation with tblgen"
diff --git a/libclamav/c++/llvm/README.txt b/libclamav/c++/llvm/README.txt
index c78a9ee..7388752 100644
--- a/libclamav/c++/llvm/README.txt
+++ b/libclamav/c++/llvm/README.txt
@@ -10,4 +10,3 @@ the license agreement found in LICENSE.txt.
 
 Please see the HTML documentation provided in docs/index.html for further
 assistance with LLVM.
-
diff --git a/libclamav/c++/llvm/autoconf/configure.ac b/libclamav/c++/llvm/autoconf/configure.ac
index c839a0c..df47343 100644
--- a/libclamav/c++/llvm/autoconf/configure.ac
+++ b/libclamav/c++/llvm/autoconf/configure.ac
@@ -49,7 +49,6 @@ AC_CONFIG_SRCDIR([lib/VMCore/Module.cpp])
 dnl Place all of the extra autoconf files into the config subdirectory. Tell
 dnl various tools where the m4 autoconf macros are.
 AC_CONFIG_AUX_DIR([autoconf])
-AC_CONFIG_MACRO_DIR([m4])
 
 dnl Quit if the source directory has already been configured.
 dnl NOTE: This relies upon undocumented autoconf behavior.
@@ -728,13 +727,13 @@ fi
 
 dnl --enable-libffi : check whether the user wants to turn off libffi:
 AC_ARG_ENABLE(libffi,AS_HELP_STRING(
-  --enable-libffi,[Check for the presence of libffi (default is YES)]),,
-  enableval=yes)
-case "$enableval" in
-  yes) llvm_cv_enable_libffi="yes" ;;
-  no)  llvm_cv_enable_libffi="no"  ;;
-  *) AC_MSG_ERROR([Invalid setting for --enable-libffi. Use "yes" or "no"]) ;;
-esac
+  --enable-libffi,[Check for the presence of libffi (default is NO)]),
+  [case "$enableval" in
+    yes) llvm_cv_enable_libffi="yes" ;;
+    no)  llvm_cv_enable_libffi="no"  ;;
+    *) AC_MSG_ERROR([Invalid setting for --enable-libffi. Use "yes" or "no"]) ;;
+  esac],
+  llvm_cv_enable_libffi=no)
 
 dnl Only Windows needs dynamic libCompilerDriver to support plugins.
 if test "$llvm_cv_os_type" = "Win32" ; then
@@ -1023,7 +1022,7 @@ dnl libffi is optional; used to call external functions from the interpreter
 if test "$llvm_cv_enable_libffi" = "yes" ; then
   AC_SEARCH_LIBS(ffi_call,ffi,AC_DEFINE([HAVE_FFI_CALL],[1],
                  [Define if libffi is available on this platform.]),
-                 AC_MSG_WARN([libffi not found - disabling external calls from interpreter]))
+                 AC_MSG_ERROR([libffi not found - configure without --enable-libffi to compile without it]))
 fi
 
 dnl mallinfo is optional; the code can compile (minus features) without it
diff --git a/libclamav/c++/llvm/configure b/libclamav/c++/llvm/configure
index b247fc2..e07043a 100755
--- a/libclamav/c++/llvm/configure
+++ b/libclamav/c++/llvm/configure
@@ -1482,7 +1482,7 @@ Optional Features:
                           %a (default is YES)
   --enable-bindings       Build specific language bindings:
                           all,auto,none,{binding-name} (default=auto)
-  --enable-libffi         Check for the presence of libffi (default is YES)
+  --enable-libffi         Check for the presence of libffi (default is NO)
   --enable-llvmc-dynamic  Link LLVMC dynamically (default is NO, unless on
                           Win32)
   --enable-llvmc-dynamic-plugins
@@ -2486,7 +2486,6 @@ ac_configure="$SHELL $ac_aux_dir/configure"  # Please don't use this var.
 
 
 
-
 if test ${srcdir} != "." ; then
   if test -f ${srcdir}/include/llvm/Config/config.h ; then
     as_fn_error "Already configured in ${srcdir}" "$LINENO" 5
@@ -5139,16 +5138,15 @@ fi
 
 # Check whether --enable-libffi was given.
 if test "${enable_libffi+set}" = set; then :
-  enableval=$enable_libffi;
+  enableval=$enable_libffi; case "$enableval" in
+    yes) llvm_cv_enable_libffi="yes" ;;
+    no)  llvm_cv_enable_libffi="no"  ;;
+    *) as_fn_error "Invalid setting for --enable-libffi. Use \"yes\" or \"no\"" "$LINENO" 5 ;;
+  esac
 else
-  enableval=yes
+  llvm_cv_enable_libffi=no
 fi
 
-case "$enableval" in
-  yes) llvm_cv_enable_libffi="yes" ;;
-  no)  llvm_cv_enable_libffi="no"  ;;
-  *) as_fn_error "Invalid setting for --enable-libffi. Use \"yes\" or \"no\"" "$LINENO" 5 ;;
-esac
 
 if test "$llvm_cv_os_type" = "Win32" ; then
    llvmc_dynamic="yes"
@@ -9447,7 +9445,7 @@ else
   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
   lt_status=$lt_dlunknown
   cat > conftest.$ac_ext <<EOF
-#line 9450 "configure"
+#line 9448 "configure"
 #include "confdefs.h"
 
 #if HAVE_DLFCN_H
@@ -10222,8 +10220,7 @@ if test "$ac_res" != no; then :
 $as_echo "#define HAVE_FFI_CALL 1" >>confdefs.h
 
 else
-  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: libffi not found - disabling external calls from interpreter" >&5
-$as_echo "$as_me: WARNING: libffi not found - disabling external calls from interpreter" >&2;}
+  as_fn_error "libffi not found - configure without --enable-libffi to compile without it" "$LINENO" 5
 fi
 
 fi
diff --git a/libclamav/c++/llvm/docs/CommandGuide/lit.pod b/libclamav/c++/llvm/docs/CommandGuide/lit.pod
index 246fc66..72d9d2b 100644
--- a/libclamav/c++/llvm/docs/CommandGuide/lit.pod
+++ b/libclamav/c++/llvm/docs/CommandGuide/lit.pod
@@ -49,13 +49,13 @@ Show the B<lit> help message.
 
 =item B<-j> I<N>, B<--threads>=I<N>
 
-Run I<N> tests in parallel. By default, this is automatically chose to match the
-number of detected available CPUs.
+Run I<N> tests in parallel. By default, this is automatically chosen to match
+the number of detected available CPUs.
 
 =item B<--config-prefix>=I<NAME>
 
 Search for I<NAME.cfg> and I<NAME.site.cfg> when searching for test suites,
-instead I<lit.cfg> and I<lit.site.cfg>.
+instead of I<lit.cfg> and I<lit.site.cfg>.
 
 =item B<--param> I<NAME>, B<--param> I<NAME>=I<VALUE>
 
@@ -237,7 +237,7 @@ creating a new B<lit> testing implementation, or extending an existing one.
 
 B<lit> proper is primarily an infrastructure for discovering and running
 arbitrary tests, and to expose a single convenient interface to these
-tests. B<lit> itself doesn't contain know how to run tests, rather this logic is
+tests. B<lit> itself doesn't know how to run tests, rather this logic is
 defined by I<test suites>.
 
 =head2 TEST SUITES
diff --git a/libclamav/c++/llvm/docs/CommandGuide/llvm-db.pod b/libclamav/c++/llvm/docs/CommandGuide/llvm-db.pod
deleted file mode 100644
index 1324176..0000000
--- a/libclamav/c++/llvm/docs/CommandGuide/llvm-db.pod
+++ /dev/null
@@ -1,16 +0,0 @@
-=pod
-
-=head1 NAME
-
-llvm-db - LLVM debugger (alpha)
-
-=head1 SYNOPSIS
-
-Details coming soon. Please see 
-L<http://llvm.org/docs/SourceLevelDebugging.html> in the meantime.
-
-=head1 AUTHORS
-
-Maintained by the LLVM Team (L<http://llvm.org>).
-
-=cut
diff --git a/libclamav/c++/llvm/docs/CommandGuide/llvm-extract.pod b/libclamav/c++/llvm/docs/CommandGuide/llvm-extract.pod
index b62e8ae..02f38ad 100644
--- a/libclamav/c++/llvm/docs/CommandGuide/llvm-extract.pod
+++ b/libclamav/c++/llvm/docs/CommandGuide/llvm-extract.pod
@@ -34,7 +34,13 @@ B<llvm-extract> will write raw bitcode regardless of the output device.
 
 =item B<--func> I<function-name>
 
-Extract the function named I<function-name> from the LLVM bitcode.
+Extract the function named I<function-name> from the LLVM bitcode. May be
+specified multiple times to extract multiple functions at once.
+
+=item B<--glob> I<global-name>
+
+Extract the global variable named I<global-name> from the LLVM bitcode. May be
+specified multiple times to extract multiple global variables at once.
 
 =item B<--help>
 
diff --git a/libclamav/c++/llvm/docs/ExceptionHandling.html b/libclamav/c++/llvm/docs/ExceptionHandling.html
index 438edda..9c7c615 100644
--- a/libclamav/c++/llvm/docs/ExceptionHandling.html
+++ b/libclamav/c++/llvm/docs/ExceptionHandling.html
@@ -39,6 +39,7 @@
   	<li><a href="#llvm_eh_sjlj_setjmp"><tt>llvm.eh.sjlj.setjmp</tt></a></li>
   	<li><a href="#llvm_eh_sjlj_longjmp"><tt>llvm.eh.sjlj.longjmp</tt></a></li>
   	<li><a href="#llvm_eh_sjlj_lsda"><tt>llvm.eh.sjlj.lsda</tt></a></li>
+  	<li><a href="#llvm_eh_sjlj_callsite"><tt>llvm.eh.sjlj.callsite</tt></a></li>
   </ol></li>
   <li><a href="#asm">Asm Table Formats</a>
   <ol>
@@ -509,6 +510,24 @@
 </div>
 
 <!-- ======================================================================= -->
+<div class="doc_subsubsection">
+  <a name="llvm_eh_sjlj_callsite">llvm.eh.sjlj.callsite</a>
+</div>
+
+<div class="doc_text">
+
+<pre>
+  void %<a href="#llvm_eh_sjlj_callsite">llvm.eh.sjlj.callsite</a>(i32)
+</pre>
+
+<p>For SJLJ based exception handling, the <a href="#llvm_eh_sjlj_callsite">
+  <tt>llvm.eh.sjlj.callsite</tt></a> intrinsic identifies the callsite value
+  associated with the following invoke instruction. This is used to ensure
+  that landing pad entries in the LSDA are generated in the matching order.</p>
+
+</div>
+
+<!-- ======================================================================= -->
 <div class="doc_section">
   <a name="asm">Asm Table Formats</a>
 </div>
diff --git a/libclamav/c++/llvm/docs/GettingStarted.html b/libclamav/c++/llvm/docs/GettingStarted.html
index c27101e..89253b6 100644
--- a/libclamav/c++/llvm/docs/GettingStarted.html
+++ b/libclamav/c++/llvm/docs/GettingStarted.html
@@ -256,13 +256,13 @@ software you will need.</p>
   <td>Cygwin/Win32</td>
   <td>x86<sup><a href="#pf_1">1</a>,<a href="#pf_8">8</a>,
      <a href="#pf_11">11</a></sup></td>
-  <td>GCC 3.4.X, binutils 2.15</td>
+  <td>GCC 3.4.X, binutils 2.20</td>
 </tr>
 <tr>
   <td>MinGW/Win32</td>
   <td>x86<sup><a href="#pf_1">1</a>,<a href="#pf_6">6</a>,
      <a href="#pf_8">8</a>, <a href="#pf_10">10</a></sup></td>
-  <td>GCC 3.4.X, binutils 2.15</td>
+  <td>GCC 3.4.X, binutils 2.20</td>
 </tr>
 </table>
 
@@ -318,12 +318,8 @@ up</a></li>
 <li><a name="pf_5">The GCC-based C/C++ frontend does not build</a></li>
 <li><a name="pf_6">The port is done using the MSYS shell.</a></li>
 <li><a name="pf_7">Native code generation exists but is not complete.</a></li>
-<li><a name="pf_8">Binutils</a> up to post-2.17 has bug in bfd/cofflink.c
-    preventing LLVM from building correctly. Several workarounds have been
-    introduced into LLVM build system, but the bug can occur anytime in the
-    future. We highly recommend that you rebuild your current binutils with the
-    patch from <a href="http://sourceware.org/bugzilla/show_bug.cgi?id=2659">
-    Binutils bugzilla</a>, if it wasn't already applied.</li>
+<li><a name="pf_8">Binutils 2.20 or later is required to build the assembler
+    generated by LLVM properly.</a></li>
 <li><a name="pf_9">XCode 2.5 and gcc 4.0.1</a> (Apple Build 5370) will trip
     internal LLVM assert messages when compiled for Release at optimization
     levels greater than 0 (i.e., <i>"-O1"</i> and higher).
diff --git a/libclamav/c++/llvm/docs/LangRef.html b/libclamav/c++/llvm/docs/LangRef.html
index c028f6b..b337b6a 100644
--- a/libclamav/c++/llvm/docs/LangRef.html
+++ b/libclamav/c++/llvm/docs/LangRef.html
@@ -66,12 +66,17 @@
       </li>
       <li><a href="#t_derived">Derived Types</a>
         <ol>
-          <li><a href="#t_array">Array Type</a></li>
+          <li><a href="#t_aggregate">Aggregate Types</a>
+            <ol>
+              <li><a href="#t_array">Array Type</a></li>
+              <li><a href="#t_struct">Structure Type</a></li>
+              <li><a href="#t_pstruct">Packed Structure Type</a></li>
+              <li><a href="#t_union">Union Type</a></li>
+              <li><a href="#t_vector">Vector Type</a></li>
+            </ol>
+          </li>
           <li><a href="#t_function">Function Type</a></li>
           <li><a href="#t_pointer">Pointer Type</a></li>
-          <li><a href="#t_struct">Structure Type</a></li>
-          <li><a href="#t_pstruct">Packed Structure Type</a></li>
-          <li><a href="#t_vector">Vector Type</a></li>
           <li><a href="#t_opaque">Opaque Type</a></li>
         </ol>
       </li>
@@ -1078,11 +1083,21 @@ define void @f() optsize { ... }
 </div>
 
 <dl>
+  <dt><tt><b>alignstack(&lt;<em>n</em>&gt;)</b></tt></dt>
+  <dd>This attribute indicates that, when emitting the prologue and epilogue,
+      the backend should forcibly align the stack pointer. Specify the
+      desired alignment, which must be a power of two, in parentheses.
+
   <dt><tt><b>alwaysinline</b></tt></dt>
   <dd>This attribute indicates that the inliner should attempt to inline this
       function into callers whenever possible, ignoring any active inlining size
       threshold for this caller.</dd>
 
+  <dt><tt><b>inlinehint</b></tt></dt>
+  <dd>This attribute indicates that the source code contained a hint that inlining
+      this function is desirable (such as the "inline" keyword in C/C++).  It
+      is just a hint; it imposes no requirements on the inliner.</dd>
+
   <dt><tt><b>noinline</b></tt></dt>
   <dd>This attribute indicates that the inliner should never inline this
       function in any situation. This attribute may not be used together with
@@ -1394,6 +1409,7 @@ Classifications</a> </div>
           <a href="#t_pointer">pointer</a>,
           <a href="#t_vector">vector</a>,
           <a href="#t_struct">structure</a>,
+          <a href="#t_union">union</a>,
           <a href="#t_array">array</a>,
           <a href="#t_label">label</a>,
           <a href="#t_metadata">metadata</a>.
@@ -1408,12 +1424,12 @@ Classifications</a> </div>
     </tr>
     <tr>
       <td><a href="#t_derived">derived</a></td>
-      <td><a href="#t_integer">integer</a>,
-          <a href="#t_array">array</a>,
+      <td><a href="#t_array">array</a>,
           <a href="#t_function">function</a>,
           <a href="#t_pointer">pointer</a>,
           <a href="#t_struct">structure</a>,
           <a href="#t_pstruct">packed structure</a>,
+          <a href="#t_union">union</a>,
           <a href="#t_vector">vector</a>,
           <a href="#t_opaque">opaque</a>.
       </td>
@@ -1551,6 +1567,21 @@ Classifications</a> </div>
    possible to have a two dimensional array, using an array as the element type
    of another array.</p>
 
+   
+</div>
+
+<!-- _______________________________________________________________________ -->
+<div class="doc_subsubsection"> <a name="t_aggregate">Aggregate Types</a> </div>
+
+<div class="doc_text">
+
+<p>Aggregate Types are a subset of derived types that can contain multiple
+  member types. <a href="#t_array">Arrays</a>,
+  <a href="#t_struct">structs</a>, <a href="#t_vector">vectors</a> and
+  <a href="#t_union">unions</a> are aggregate types.</p>
+
+</div>
+
 </div>
 
 <!-- _______________________________________________________________________ -->
@@ -1619,9 +1650,9 @@ Classifications</a> </div>
 <h5>Overview:</h5>
 <p>The function type can be thought of as a function signature.  It consists of
    a return type and a list of formal parameter types. The return type of a
-   function type is a scalar type, a void type, or a struct type.  If the return
-   type is a struct type then all struct elements must be of first class types,
-   and the struct must have at least one element.</p>
+   function type is a scalar type, a void type, a struct type, or a union
+   type.  If the return type is a struct type then all struct elements must be
+   of first class types, and the struct must have at least one element.</p>
 
 <h5>Syntax:</h5>
 <pre>
@@ -1744,6 +1775,53 @@ Classifications</a> </div>
 </div>
 
 <!-- _______________________________________________________________________ -->
+<div class="doc_subsubsection"> <a name="t_union">Union Type</a> </div>
+
+<div class="doc_text">
+
+<h5>Overview:</h5>
+<p>A union type describes an object with size and alignment suitable for
+   an object of any one of a given set of types (also known as an "untagged"
+   union). It is similar in concept and usage to a
+   <a href="#t_struct">struct</a>, except that all members of the union
+   have an offset of zero. The elements of a union may be any type that has a
+   size. Unions must have at least one member - empty unions are not allowed.
+   </p>
+
+<p>The size of the union as a whole will be the size of its largest member,
+   and the alignment requirements of the union as a whole will be the largest
+   alignment requirement of any member.</p>
+
+<p>Union members are accessed using '<tt><a href="#i_load">load</a></tt>' and
+   '<tt><a href="#i_store">store</a></tt>' by getting a pointer to a field with
+   the '<tt><a href="#i_getelementptr">getelementptr</a></tt>' instruction.
+   Since all members are at offset zero, the getelementptr instruction does
+   not affect the address, only the type of the resulting pointer.</p>
+
+<h5>Syntax:</h5>
+<pre>
+  union { &lt;type list&gt; }
+</pre>
+
+<h5>Examples:</h5>
+<table class="layout">
+  <tr class="layout">
+    <td class="left"><tt>union { i32, i32*, float }</tt></td>
+    <td class="left">A union of three types: an <tt>i32</tt>, a pointer to
+      an <tt>i32</tt>, and a <tt>float</tt>.</td>
+  </tr><tr class="layout">
+    <td class="left">
+      <tt>union {&nbsp;float,&nbsp;i32&nbsp;(i32)&nbsp;*&nbsp;}</tt></td>
+    <td class="left">A union, where the first element is a <tt>float</tt> and the
+      second element is a <a href="#t_pointer">pointer</a> to a
+      <a href="#t_function">function</a> that takes an <tt>i32</tt>, returning
+      an <tt>i32</tt>.</td>
+  </tr>
+</table>
+
+</div>
+
+<!-- _______________________________________________________________________ -->
 <div class="doc_subsubsection"> <a name="t_pointer">Pointer Type</a> </div>
 
 <div class="doc_text">
@@ -1981,6 +2059,14 @@ Classifications</a> </div>
       the number and types of elements must match those specified by the
       type.</dd>
 
+  <dt><b>Union constants</b></dt>
+  <dd>Union constants are represented with notation similar to a structure with
+      a single element - that is, a single typed element surrounded
+      by braces (<tt>{}</tt>).  For example: "<tt>{ i32 4 }</tt>".  The
+      <a href="#t_union">union type</a> can be initialized with a single-element
+      struct as long as the type of the struct element matches the type of
+      one of the union members.</dd>
+
   <dt><b>Array constants</b></dt>
   <dd>Array constants are represented with notation similar to array type
      definitions (a comma separated list of elements, surrounded by square
@@ -1999,7 +2085,8 @@ Classifications</a> </div>
 
   <dt><b>Zero initialization</b></dt>
   <dd>The string '<tt>zeroinitializer</tt>' can be used to zero initialize a
-      value to zero of <em>any</em> type, including scalar and aggregate types.
+      value to zero of <em>any</em> type, including scalar and
+      <a href="#t_aggregate">aggregate</a> types.
       This is often used to avoid having to print large zero initializers
       (e.g. for large arrays) and is always exactly equivalent to using explicit
       zero initializers.</dd>
@@ -3835,7 +3922,8 @@ Instruction</a> </div>
 
 <div class="doc_text">
 
-<p>LLVM supports several instructions for working with aggregate values.</p>
+<p>LLVM supports several instructions for working with
+  <a href="#t_aggregate">aggregate</a> values.</p>
 
 </div>
 
@@ -3852,14 +3940,14 @@ Instruction</a> </div>
 </pre>
 
 <h5>Overview:</h5>
-<p>The '<tt>extractvalue</tt>' instruction extracts the value of a struct field
-   or array element from an aggregate value.</p>
+<p>The '<tt>extractvalue</tt>' instruction extracts the value of a member field
+   from an <a href="#t_aggregate">aggregate</a> value.</p>
 
 <h5>Arguments:</h5>
 <p>The first operand of an '<tt>extractvalue</tt>' instruction is a value
-   of <a href="#t_struct">struct</a> or <a href="#t_array">array</a> type.  The
-   operands are constant indices to specify which value to extract in a similar
-   manner as indices in a
+   of <a href="#t_struct">struct</a>, <a href="#t_union">union</a>  or
+   <a href="#t_array">array</a> type.  The operands are constant indices to
+   specify which value to extract in a similar manner as indices in a
    '<tt><a href="#i_getelementptr">getelementptr</a></tt>' instruction.</p>
 
 <h5>Semantics:</h5>
@@ -3886,16 +3974,15 @@ Instruction</a> </div>
 </pre>
 
 <h5>Overview:</h5>
-<p>The '<tt>insertvalue</tt>' instruction inserts a value into a struct field or
-   array element in an aggregate.</p>
-
+<p>The '<tt>insertvalue</tt>' instruction inserts a value into a member field
+   in an <a href="#t_aggregate">aggregate</a> value.</p>
 
 <h5>Arguments:</h5>
 <p>The first operand of an '<tt>insertvalue</tt>' instruction is a value
-   of <a href="#t_struct">struct</a> or <a href="#t_array">array</a> type.  The
-   second operand is a first-class value to insert.  The following operands are
-   constant indices indicating the position at which to insert the value in a
-   similar manner as indices in a
+   of <a href="#t_struct">struct</a>, <a href="#t_union">union</a> or
+   <a href="#t_array">array</a> type.  The second operand is a first-class
+   value to insert.  The following operands are constant indices indicating
+   the position at which to insert the value in a similar manner as indices in a
    '<tt><a href="#i_getelementptr">getelementptr</a></tt>' instruction.  The
    value to insert must have the same type as the value identified by the
    indices.</p>
@@ -4097,8 +4184,8 @@ Instruction</a> </div>
 
 <h5>Overview:</h5>
 <p>The '<tt>getelementptr</tt>' instruction is used to get the address of a
-   subelement of an aggregate data structure. It performs address calculation
-   only and does not access memory.</p>
+   subelement of an <a href="#t_aggregate">aggregate</a> data structure.
+   It performs address calculation only and does not access memory.</p>
 
 <h5>Arguments:</h5>
 <p>The first argument is always a pointer, and forms the basis of the
@@ -4108,15 +4195,15 @@ Instruction</a> </div>
    indexes the pointer value given as the first argument, the second index
    indexes a value of the type pointed to (not necessarily the value directly
    pointed to, since the first index can be non-zero), etc. The first type
-   indexed into must be a pointer value, subsequent types can be arrays, vectors
-   and structs. Note that subsequent types being indexed into can never be
-   pointers, since that would require loading the pointer before continuing
-   calculation.</p>
+   indexed into must be a pointer value, subsequent types can be arrays,
+   vectors, structs and unions. Note that subsequent types being indexed into
+   can never be pointers, since that would require loading the pointer before
+   continuing calculation.</p>
 
 <p>The type of each index argument depends on the type it is indexing into.
-   When indexing into a (optionally packed) structure, only <tt>i32</tt> integer
-   <b>constants</b> are allowed.  When indexing into an array, pointer or
-   vector, integers of any width are allowed, and they are not required to be
+   When indexing into an (optionally packed) structure or union, only <tt>i32</tt>
+   integer <b>constants</b> are allowed.  When indexing into an array, pointer
+   or vector, integers of any width are allowed, and they are not required to be
    constant.</p>
 
 <p>For example, let's consider a C code fragment and how it gets compiled to
diff --git a/libclamav/c++/llvm/docs/ProgrammersManual.html b/libclamav/c++/llvm/docs/ProgrammersManual.html
index 7845d99..a37eca2 100644
--- a/libclamav/c++/llvm/docs/ProgrammersManual.html
+++ b/libclamav/c++/llvm/docs/ProgrammersManual.html
@@ -150,6 +150,7 @@ with another <tt>Value</tt></a> </li>
     <li><a href="#shutdown">Ending execution with <tt>llvm_shutdown()</tt></a></li>
     <li><a href="#managedstatic">Lazy initialization with <tt>ManagedStatic</tt></a></li>
     <li><a href="#llvmcontext">Achieving Isolation with <tt>LLVMContext</tt></a></li>
+    <li><a href="#jitthreading">Threads and the JIT</a></li>
   </ul>
   </li>
 
@@ -2386,9 +2387,9 @@ failure of the initialization.  Failure typically indicates that your copy of
 LLVM was built without multithreading support, typically because GCC atomic
 intrinsics were not found in your system compiler.  In this case, the LLVM API
 will not be safe for concurrent calls.  However, it <em>will</em> be safe for
-hosting threaded applications in the JIT, though care must be taken to ensure
-that side exits and the like do not accidentally result in concurrent LLVM API
-calls.
+hosting threaded applications in the JIT, though <a href="#jitthreading">care
+must be taken</a> to ensure that side exits and the like do not accidentally
+result in concurrent LLVM API calls.
 </p>
 </div>
 
@@ -2485,6 +2486,34 @@ isolation is not a concern.
 </p>
 </div>
 
+<!-- ======================================================================= -->
+<div class="doc_subsection">
+  <a name="jitthreading">Threads and the JIT</a>
+</div>
+
+<div class="doc_text">
+<p>
+LLVM's "eager" JIT compiler is safe to use in threaded programs.  Multiple
+threads can call <tt>ExecutionEngine::getPointerToFunction()</tt> or
+<tt>ExecutionEngine::runFunction()</tt> concurrently, and multiple threads can
+run code output by the JIT concurrently.  The user must still ensure that only
+one thread accesses IR in a given <tt>LLVMContext</tt> while another thread
+might be modifying it.  One way to do that is to always hold the JIT lock while
+accessing IR outside the JIT (the JIT <em>modifies</em> the IR by adding
+<tt>CallbackVH</tt>s).  Another way is to only
+call <tt>getPointerToFunction()</tt> from the <tt>LLVMContext</tt>'s thread.
+</p>
+
+<p>When the JIT is configured to compile lazily (using
+<tt>ExecutionEngine::DisableLazyCompilation(false)</tt>), there is currently a
+<a href="http://llvm.org/bugs/show_bug.cgi?id=5184">race condition</a> in
+updating call sites after a function is lazily jitted.  It is still possible to
+use the lazy JIT in a threaded program if you ensure that only one thread at a
+time can call any particular lazy stub and that the JIT lock guards any IR
+access, but we suggest using only the eager JIT in threaded programs.
+</p>
+</div>
+
 <!-- *********************************************************************** -->
 <div class="doc_section">
   <a name="advanced">Advanced Topics</a>
diff --git a/libclamav/c++/llvm/docs/ReleaseNotes.html b/libclamav/c++/llvm/docs/ReleaseNotes.html
index f3d87c6..88c7de0 100644
--- a/libclamav/c++/llvm/docs/ReleaseNotes.html
+++ b/libclamav/c++/llvm/docs/ReleaseNotes.html
@@ -4,17 +4,17 @@
 <head>
   <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
   <link rel="stylesheet" href="llvm.css" type="text/css">
-  <title>LLVM 2.6 Release Notes</title>
+  <title>LLVM 2.7 Release Notes</title>
 </head>
 <body>
 
-<div class="doc_title">LLVM 2.6 Release Notes</div>
+<div class="doc_title">LLVM 2.7 Release Notes</div>
 
 <ol>
   <li><a href="#intro">Introduction</a></li>
   <li><a href="#subproj">Sub-project Status Update</a></li>
-  <li><a href="#externalproj">External Projects Using LLVM 2.6</a></li>
-  <li><a href="#whatsnew">What's New in LLVM 2.6?</a></li>
+  <li><a href="#externalproj">External Projects Using LLVM 2.7</a></li>
+  <li><a href="#whatsnew">What's New in LLVM 2.7?</a></li>
   <li><a href="GettingStarted.html">Installation Instructions</a></li>
   <li><a href="#portability">Portability and Supported Platforms</a></li>
   <li><a href="#knownproblems">Known Problems</a></li>
@@ -25,6 +25,12 @@
   <p>Written by the <a href="http://llvm.org">LLVM Team</a></p>
 </div>
 
+<h1 style="color:red">These are in-progress notes for the upcoming LLVM 2.7
+release.<br>
+You may prefer the
+<a href="http://llvm.org/releases/2.6/docs/ReleaseNotes.html">LLVM 2.6
+Release Notes</a>.</h1>
+
 <!-- *********************************************************************** -->
 <div class="doc_section">
   <a name="intro">Introduction</a>
@@ -34,7 +40,7 @@
 <div class="doc_text">
 
 <p>This document contains the release notes for the LLVM Compiler
-Infrastructure, release 2.6.  Here we describe the status of LLVM, including
+Infrastructure, release 2.7.  Here we describe the status of LLVM, including
 major improvements from the previous release and significant known problems.
 All LLVM releases may be downloaded from the <a
 href="http://llvm.org/releases/">LLVM releases web site</a>.</p>
@@ -63,7 +69,7 @@ Almost dead code.
 -->
  
    
-<!-- Unfinished features in 2.6:
+<!-- Features that need text if they're finished for 2.7:
   gcc plugin.
   strong phi elim
   variable debug info for optimized code
@@ -94,7 +100,7 @@ Almost dead code.
 
 <div class="doc_text">
 <p>
-The LLVM 2.6 distribution currently consists of code from the core LLVM
+The LLVM 2.7 distribution currently consists of code from the core LLVM
 repository (which roughly includes the LLVM optimizers, code generators
 and supporting tools), the Clang repository and the llvm-gcc repository.  In
 addition to this code, the LLVM Project includes other sub-projects that are in
@@ -111,31 +117,12 @@ development.  Here we include updates on these subprojects.
 
 <div class="doc_text">
 
-<p>The <a href="http://clang.llvm.org/">Clang project</a> is an effort to build
-a set of new 'LLVM native' front-end technologies for the C family of languages.
-LLVM 2.6 is the first release to officially include Clang, and it provides a
-production quality C and Objective-C compiler.  If you are interested in <a 
-href="http://clang.llvm.org/performance.html">fast compiles</a> and
-<a href="http://clang.llvm.org/diagnostics.html">good diagnostics</a>, we
-encourage you to try it out.  Clang currently compiles typical Objective-C code
-3x faster than GCC and compiles C code about 30% faster than GCC at -O0 -g
-(which is when the most pressure is on the frontend).</p>
-
-<p>In addition to supporting these languages, C++ support is also <a
-href="http://clang.llvm.org/cxx_status.html">well under way</a>, and mainline
-Clang is able to parse the libstdc++ 4.2 headers and even codegen simple apps.
-If you are interested in Clang C++ support or any other Clang feature, we
-strongly encourage you to get involved on the <a 
-href="http://lists.cs.uiuc.edu/mailman/listinfo/cfe-dev">Clang front-end mailing
-list</a>.</p>
-
-<p>In the LLVM 2.6 time-frame, the Clang team has made many improvements:</p>
+<p>The <a href="http://clang.llvm.org/">Clang project</a> is ...</p>
+
+<p>In the LLVM 2.7 time-frame, the Clang team has made many improvements:</p>
 
 <ul>
-<li>C and Objective-C support are now considered production quality.</li>
-<li>AuroraUX, FreeBSD and OpenBSD are now supported.</li>
-<li>Most of Objective-C 2.0 is now supported with the GNU runtime.</li>
-<li>Many many bugs are fixed and lots of features have been added.</li>
+<li>...</li>
 </ul>
 </div>
 
@@ -146,24 +133,13 @@ list</a>.</p>
 
 <div class="doc_text">
 
-<p>Previously announced in the 2.4 and 2.5 LLVM releases, the Clang project also
+<p>Previously announced in the 2.4, 2.5, and 2.6 LLVM releases, the Clang project also
 includes an early stage static source code analysis tool for <a
 href="http://clang.llvm.org/StaticAnalysis.html">automatically finding bugs</a>
 in C and Objective-C programs. The tool performs checks to find
 bugs that occur on a specific path within a program.</p>
 
-<p>In the LLVM 2.6 time-frame, the analyzer core has undergone several important
-improvements and cleanups and now includes a new <em>Checker</em> interface that
-is intended to eventually serve as a basis for domain-specific checks. Further,
-in addition to generating HTML files for reporting analysis results, the
-analyzer can now also emit bug reports in a structured XML format that is
-intended to be easily readable by other programs.</p>
-
-<p>The set of checks performed by the static analyzer continues to expand, and
-future plans for the tool include full source-level inter-procedural analysis
-and deeper checks such as buffer overrun detection. There are many opportunities
-to extend and enhance the static analyzer, and anyone interested in working on
-this project is encouraged to get involved!</p>
+<p>In the LLVM 2.7 time-frame, the analyzer core has ...</p>
 
 </div>
 
@@ -180,20 +156,13 @@ implementation of the CLI) using LLVM for static and just-in-time
 compilation.</p>
 
 <p>
-VMKit version 0.26 builds with LLVM 2.6 and you can find it on its
+VMKit version ?? builds with LLVM 2.7 and you can find it on its
 <a href="http://vmkit.llvm.org/releases/">web page</a>. The release includes
 bug fixes, cleanup and new features. The major changes are:</p>
 
 <ul>
 
-<li>A new llcj tool to generate shared libraries or executables of Java
-    files.</li>
-<li>Cooperative garbage collection. </li>
-<li>Fast subtype checking (paper from Click et al [JGI'02]). </li>
-<li>Implementation of a two-word header for Java objects instead of the original
-    three-word header. </li>
-<li>Better Java specification-compliance: division by zero checks, stack
-    overflow checks, finalization and references support. </li>
+<li>...</li>
 
 </ul>
 </div>
@@ -249,22 +218,7 @@ KLEE.</p>
 The goal of <a href="http://dragonegg.llvm.org/">DragonEgg</a> is to make
 gcc-4.5 act like llvm-gcc without requiring any gcc modifications whatsoever.
 <a href="http://dragonegg.llvm.org/">DragonEgg</a> is a shared library (dragonegg.so)
-that is loaded by gcc at runtime.  It uses the new gcc plugin architecture to
-disable the GCC optimizers and code generators, and schedule the LLVM optimizers
-and code generators (or direct output of LLVM IR) instead.  Currently only Linux
-and Darwin are supported, and only on x86-32 and x86-64.  It should be easy to
-add additional unix-like architectures and other processor families.  In theory
-it should be possible to use <a href="http://dragonegg.llvm.org/">DragonEgg</a>
-with any language supported by gcc, however only C and Fortran work well for the
-moment.  Ada and C++ work to some extent, while Java, Obj-C and Obj-C++ are so
-far entirely untested.  Since gcc-4.5 has not yet been released, neither has
-<a href="http://dragonegg.llvm.org/">DragonEgg</a>.  To build
-<a href="http://dragonegg.llvm.org/">DragonEgg</a> you will need to check out the
-development versions of <a href="http://gcc.gnu.org/svn.html/"> gcc</a>,
-<a href="http://llvm.org/docs/GettingStarted.html#checkout">llvm</a> and
-<a href="http://dragonegg.llvm.org/">DragonEgg</a> from their respective
-subversion repositories, and follow the instructions in the
-<a href="http://dragonegg.llvm.org/">DragonEgg</a> README.
+that is loaded by gcc at runtime.  It ...
 </p>
 
 </div>
@@ -277,29 +231,7 @@ subversion repositories, and follow the instructions in the
 
 <div class="doc_text">
 <p>
-The LLVM Machine Code (MC) Toolkit project is a (very early) effort to build
-better tools for dealing with machine code, object file formats, etc.  The idea
-is to be able to generate most of the target specific details of assemblers and
-disassemblers from existing LLVM target .td files (with suitable enhancements),
-and to build infrastructure for reading and writing common object file formats.
-One of the first deliverables is to build a full assembler and integrate it into
-the compiler, which is predicted to substantially reduce compile time in some
-scenarios.
-</p>
-
-<p>In the LLVM 2.6 timeframe, the MC framework has grown to the point where it
-can reliably parse and pretty print (with some encoding information) a
-darwin/x86 .s file successfully, and has the very early phases of a Mach-O
-assembler in progress.  Beyond the MC framework itself, major refactoring of the
-LLVM code generator has started.  The idea is to make the code generator reason
-about the code it is producing in a much more semantic way, rather than a
-textual way.  For example, the code generator now uses MCSection objects to
-represent section assignments, instead of text strings that print to .section
-directives.</p>
-
-<p>MC is an early and ongoing project that will hopefully continue to lead to
-many improvements in the code generator and build infrastructure useful for many
-other situations.
+The LLVM Machine Code (MC) Toolkit project is ...
 </p>
 
 </div>	
@@ -307,7 +239,7 @@ other situations.
 
 <!-- *********************************************************************** -->
 <div class="doc_section">
-  <a name="externalproj">External Open Source Projects Using LLVM 2.6</a>
+  <a name="externalproj">External Open Source Projects Using LLVM 2.7</a>
 </div>
 <!-- *********************************************************************** -->
 
@@ -315,7 +247,7 @@ other situations.
 
 <p>An exciting aspect of LLVM is that it is used as an enabling technology for
    a lot of other language and tools projects.  This section lists some of the
-   projects that have already been updated to work with LLVM 2.6.</p>
+   projects that have already been updated to work with LLVM 2.7.</p>
 </div>
 
 
@@ -376,8 +308,8 @@ built-in list and matrix support (including list and matrix comprehensions) and
 an easy-to-use C interface. The interpreter uses LLVM as a backend to
  JIT-compile Pure programs to fast native code.</p>
 
-<p>Pure versions 0.31 and later have been tested and are known to work with
-LLVM 2.6 (and continue to work with older LLVM releases >= 2.3 as well).
+<p>Pure versions ??? and later have been tested and are known to work with
+LLVM 2.7 (and continue to work with older LLVM releases &gt;= 2.3 as well).
 </p>
 </div>
 
@@ -460,7 +392,7 @@ code.
 
 <!-- *********************************************************************** -->
 <div class="doc_section">
-  <a name="whatsnew">What's New in LLVM 2.6?</a>
+  <a name="whatsnew">What's New in LLVM 2.7?</a>
 </div>
 <!-- *********************************************************************** -->
 
@@ -480,28 +412,10 @@ in this section.
 
 <div class="doc_text">
 
-<p>LLVM 2.6 includes several major new capabilities:</p>
+<p>LLVM 2.7 includes several major new capabilities:</p>
 
 <ul>
-<li>New <a href="#compiler-rt">compiler-rt</a>, <A href="#klee">KLEE</a>
-    and <a href="#mc">machine code toolkit</a> sub-projects.</li>
-<li>Debug information now includes line numbers when optimizations are enabled.
-    This allows statistical sampling tools like OProfile and Shark to map
-    samples back to source lines.</li>
-<li>LLVM now includes new experimental backends to support the MSP430, SystemZ
-    and BlackFin architectures.</li>
-<li>LLVM supports a new <a href="GoldPlugin.html">Gold Linker Plugin</a> which
-    enables support for <a href="LinkTimeOptimization.html">transparent
-    link-time optimization</a> on ELF targets when used with the Gold binutils
-    linker.</li>
-<li>LLVM now supports doing optimization and code generation on multiple 
-    threads.  Please see the <a href="ProgrammersManual.html#threading">LLVM
-    Programmer's Manual</a> for more information.</li>
-<li>LLVM now has experimental support for <a
-    href="http://nondot.org/~sabre/LLVMNotes/EmbeddedMetadata.txt">embedded
-    metadata</a> in LLVM IR, though the implementation is not guaranteed to be
-    final and the .bc file format may change in future releases.  Debug info 
-    does not yet use this format in LLVM 2.6.</li>
+<li>...</li>
 </ul>
 
 </div>
@@ -516,50 +430,7 @@ in this section.
 expose new optimization opportunities:</p>
 
 <ul>
-<li>The <a href="LangRef.html#i_add">add</a>, <a 
-    href="LangRef.html#i_sub">sub</a> and <a href="LangRef.html#i_mul">mul</a>
-    instructions have been split into integer and floating point versions (like
-    divide and remainder), introducing new <a
-    href="LangRef.html#i_fadd">fadd</a>, <a href="LangRef.html#i_fsub">fsub</a>,
-    and <a href="LangRef.html#i_fmul">fmul</a> instructions.</li>
-<li>The <a href="LangRef.html#i_add">add</a>, <a 
-    href="LangRef.html#i_sub">sub</a> and <a href="LangRef.html#i_mul">mul</a>
-    instructions now support optional "nsw" and "nuw" bits which indicate that
-    the operation is guaranteed to not overflow (in the signed or
-    unsigned case, respectively).  This gives the optimizer more information and
-    can be used for things like C signed integer values, which are undefined on
-    overflow.</li>
-<li>The <a href="LangRef.html#i_sdiv">sdiv</a> instruction now supports an
-    optional "exact" flag which indicates that the result of the division is
-    guaranteed to have a remainder of zero.  This is useful for optimizing pointer
-    subtraction in C.</li>
-<li>The <a href="LangRef.html#i_getelementptr">getelementptr</a> instruction now
-    supports arbitrary integer index values for array/pointer indices.  This
-    allows for better code generation on 16-bit pointer targets like PIC16.</li>
-<li>The <a href="LangRef.html#i_getelementptr">getelementptr</a> instruction now
-    supports an "inbounds" optimization hint that tells the optimizer that the
-    pointer is guaranteed to be within its allocated object.</li>
-<li>LLVM now support a series of new linkage types for global values which allow
-    for better optimization and new capabilities:
-    <ul>
-    <li><a href="LangRef.html#linkage_linkonce">linkonce_odr</a> and
-        <a href="LangRef.html#linkage_weak">weak_odr</a> have the same linkage
-        semantics as the non-"odr" linkage types.  The difference is that these
-        linkage types indicate that all definitions of the specified function
-        are guaranteed to have the same semantics.  This allows inlining
-        templates functions in C++ but not inlining weak functions in C,
-        which previously both got the same linkage type.</li>
-    <li><a href="LangRef.html#linkage_available_externally">available_externally
-        </a> is a new linkage type that gives the optimizer visibility into the
-        definition of a function (allowing inlining and side effect analysis)
-        but that does not cause code to be generated.  This allows better
-        optimization of "GNU inline" functions, extern templates, etc.</li>
-    <li><a href="LangRef.html#linkage_linker_private">linker_private</a> is a
-        new linkage type (which is only useful on Mac OS X) that is used for
-        some metadata generation and other obscure things.</li>
-    </ul></li>
-<li>Finally, target-specific intrinsics can now return multiple values, which
-    is useful for modeling target operations with multiple results.</li>
+<li>...</li>
 </ul>
 
 </div>
@@ -576,23 +447,7 @@ release includes a few major enhancements and additions to the optimizers:</p>
 
 <ul>
 
-<li>The <a href="Passes.html#scalarrepl">Scalar Replacement of Aggregates</a>
-    pass has many improvements that allow it to better promote vector unions,
-    variables which are memset, and much more strange code that can happen to
-    do bitfield accesses to register operations.  An interesting change is that
-    it now produces "unusual" integer sizes (like i1704) in some cases and lets
-    other optimizers clean things up.</li>
-<li>The <a href="Passes.html#loop-reduce">Loop Strength Reduction</a> pass now
-    promotes small integer induction variables to 64-bit on 64-bit targets,
-    which provides a major performance boost for much numerical code.  It also
-    promotes shorts to int on 32-bit hosts, etc.  LSR now also analyzes pointer
-    expressions (e.g. getelementptrs), as well as integers.</li>
-<li>The <a href="Passes.html#gvn">GVN</a> pass now eliminates partial
-    redundancies of loads in simple cases.</li>
-<li>The <a href="Passes.html#inline">Inliner</a> now reuses stack space when
-    inlining similar arrays from multiple callees into one caller.</li>
-<li>LLVM includes a new experimental Static Single Information (SSI)
-    construction pass.</li>
+<li>...</li>
 
 </ul>
 
@@ -607,17 +462,15 @@ release includes a few major enhancements and additions to the optimizers:</p>
 <div class="doc_text">
 
 <ul>
-<li>LLVM has a new "EngineBuilder" class which makes it more obvious how to
-    set up and configure an ExecutionEngine (a JIT or interpreter).</li>
-<li>The JIT now supports generating more than 16M of code.</li>
-<li>When configured with <tt>--with-oprofile</tt>, the JIT can now inform
-     OProfile about JIT'd code, allowing OProfile to get line number and function
-     name information for JIT'd functions.</li>
-<li>When "libffi" is available, the LLVM interpreter now uses it, which supports
-    calling almost arbitrary external (natively compiled) functions.</li>
-<li>Clients of the JIT can now register a 'JITEventListener' object to receive
-    callbacks when the JIT emits or frees machine code. The OProfile support
-    uses this mechanism.</li>
+<li>The JIT now <a
+href="http://llvm.org/viewvc/llvm-project?view=rev&revision=85295">defaults
+to compiling eagerly</a> to avoid a race condition in the lazy JIT.
+Clients that still want the lazy JIT can switch it on by calling
+<tt>ExecutionEngine::DisableLazyCompilation(false)</tt>.</li>
+<li>It is now possible to create more than one JIT instance in the same process.
+These JITs can generate machine code in parallel,
+although <a href="http://llvm.org/docs/ProgrammersManual.html#jitthreading">you
+still have to obey the other threading restrictions</a>.</li>
 </ul>
 
 </div>
@@ -635,54 +488,7 @@ it run faster:</p>
 
 <ul>
 
-<li>The <tt>llc -asm-verbose</tt> option (exposed from llvm-gcc as <tt>-dA</tt>
-    and clang as <tt>-fverbose-asm</tt> or <tt>-dA</tt>) now adds a lot of 
-    useful information in comments to
-    the generated .s file.  This information includes location information (if
-    built with <tt>-g</tt>) and loop nest information.</li>
-<li>The code generator now supports a new MachineVerifier pass which is useful
-    for finding bugs in targets and codegen passes.</li>
-<li>The Machine LICM is now enabled by default.  It hoists instructions out of
-    loops (such as constant pool loads, loads from read-only stubs, vector
-    constant synthesization code, etc.) and is currently configured to only do
-    so when the hoisted operation can be rematerialized.</li>
-<li>The Machine Sinking pass is now enabled by default.  This pass moves
-    side-effect free operations down the CFG so that they are executed on fewer
-    paths through a function.</li>
-<li>The code generator now performs "stack slot coloring" of register spills,
-    which allows spill slots to be reused.  This leads to smaller stack frames
-    in cases where there are lots of register spills.</li>
-<li>The register allocator has many improvements to take better advantage of
-    commutable operations, various spiller peephole optimizations, and can now
-    coalesce cross-register-class copies.</li>
-<li>Tblgen now supports multiclass inheritance and a number of new string and
-    list operations like <tt>!(subst)</tt>, <tt>!(foreach)</tt>, <tt>!car</tt>,
-    <tt>!cdr</tt>, <tt>!null</tt>, <tt>!if</tt>, <tt>!cast</tt>.
-    These make the .td files more expressive and allow more aggressive factoring
-    of duplication across instruction patterns.</li>
-<li>Target-specific intrinsics can now be added without having to hack VMCore to
-    add them.  This makes it easier to maintain out-of-tree targets.</li>
-<li>The instruction selector is better at propagating information about values
-    (such as whether they are sign/zero extended etc.) across basic block
-    boundaries.</li>
-<li>The SelectionDAG datastructure has new nodes for representing buildvector
-    and <a href="http://llvm.org/PR2957">vector shuffle</a> operations.  This
-    makes operations and pattern matching more efficient and easier to get
-    right.</li>
-<li>The Prolog/Epilog Insertion Pass now has experimental support for performing
-    the "shrink wrapping" optimization, which moves spills and reloads around in
-    the CFG to avoid doing saves on paths that don't need them.</li>
-<li>LLVM includes new experimental support for writing ELF .o files directly
-    from the compiler.  It works well for many simple C testcases, but doesn't
-    support exception handling, debug info, inline assembly, etc.</li>
-<li>Targets can now specify register allocation hints through
-    <tt>MachineRegisterInfo::setRegAllocationHint</tt>. A regalloc hint consists
-    of hint type and physical register number. A hint type of zero specifies a
-    register allocation preference. Other hint type values are target specific
-    which are resolved by <tt>TargetRegisterInfo::ResolveRegAllocHint</tt>. An
-    example is the ARM target which uses register hints to request that the
-    register allocator provide an even / odd register pair to two virtual
-    registers.</li>
+<li>...</li>
 </ul>
 </div>
 
@@ -697,31 +503,7 @@ it run faster:</p>
 
 <ul>
 
-<li>SSE 4.2 builtins are now supported.</li>
-<li>GCC-compatible soft float modes are now supported, which are typically used
-    by OS kernels.</li>
-<li>X86-64 now models implicit zero extensions better, which allows the code
-    generator to remove a lot of redundant zexts.  It also models the 8-bit "H"
-    registers as subregs, which allows them to be used in some tricky
-    situations.</li>
-<li>X86-64 now supports the "local exec" and "initial exec" thread local storage
-    model.</li>
-<li>The vector forms of the <a href="LangRef.html#i_icmp">icmp</a> and <a
-    href="LangRef.html#i_fcmp">fcmp</a> instructions now select to efficient
-    SSE operations.</li>
-<li>Support for the win64 calling conventions have improved.  The primary
-    missing feature is support for varargs function definitions.  It seems to
-    work well for many win64 JIT purposes.</li>
-<li>The X86 backend has preliminary support for <a 
-    href="CodeGenerator.html#x86_memory">mapping address spaces to segment
-    register references</a>.  This allows you to write GS or FS relative memory
-    accesses directly in LLVM IR for cases where you know exactly what you're
-    doing (such as in an OS kernel).  There are some known problems with this
-    support, but it works in simple cases.</li>
-<li>The X86 code generator has been refactored to move all global variable
-    reference logic to one place
-    (<tt>X86Subtarget::ClassifyGlobalReference</tt>) which
-    makes it easier to reason about.</li>
+<li>...</li>
 
 </ul>
 
@@ -737,11 +519,7 @@ it run faster:</p>
 </p>
 
 <ul>
-<li>Support for floating-point, indirect function calls, and
-    passing/returning aggregate types to functions.
-<li>The code generator is able to generate debug info into output COFF files.
-<li>Support for placing an object into a specific section or at a specific
-    address in memory.</li>
+<li>...</li>
 </ul>
 
 <p>Things not yet supported:</p>
@@ -764,22 +542,9 @@ it run faster:</p>
 
 <ul>
 
-<li>Preliminary support for processors, such as the Cortex-A8 and Cortex-A9,
-that implement version v7-A of the ARM architecture.  The ARM backend now
-supports both the Thumb2 and Advanced SIMD (Neon) instruction sets.</li>
-
-<li>The AAPCS-VFP "hard float" calling conventions are also supported with the
-<tt>-float-abi=hard</tt> flag.</li>
-
-<li>The ARM calling convention code is now tblgen generated instead of resorting
-    to C++ code.</li>
+<li>...</li>
 </ul>
 
-<p>These features are still somewhat experimental
-and subject to change. The Neon intrinsics, in particular, may change in future
-releases of LLVM.  ARMv7 support has progressed a lot on top of tree since 2.6
-branched.</p>
-
 
 </div>
 
@@ -793,11 +558,7 @@ branched.</p>
 </p>
 
 <ul>
-<li>Mips now supports O32 Calling Convention.</li>
-<li>Many improvements to the 32-bit PowerPC SVR4 ABI (used on powerpc-linux)
-    support, lots of bugs fixed.</li>
-<li>Added support for the 64-bit PowerPC SVR4 ABI (used on powerpc64-linux).
-    Needs more testing.</li>
+<li>...</li>
 </ul>
 
 </div>
@@ -814,40 +575,7 @@ branched.</p>
 </p>
 
 <ul>
-<li>New <a href="http://llvm.org/doxygen/PrettyStackTrace_8h-source.html">
-    <tt>PrettyStackTrace</tt> class</a> allows crashes of llvm tools (and applications
-    that integrate them) to provide more detailed indication of what the
-    compiler was doing at the time of the crash (e.g. running a pass).
-    At the top level for each LLVM tool, it includes the command line arguments.
-    </li>
-<li>New <a href="http://llvm.org/doxygen/StringRef_8h-source.html">StringRef</a>
-    and <a href="http://llvm.org/doxygen/Twine_8h-source.html">Twine</a> classes
-    make operations on character ranges and
-    string concatenation to be more efficient.  <tt>StringRef</tt> is just a <tt>const
-    char*</tt> with a length, <tt>Twine</tt> is a light-weight rope.</li>
-<li>LLVM has new <tt>WeakVH</tt>, <tt>AssertingVH</tt> and <tt>CallbackVH</tt>
-    classes, which make it easier to write LLVM IR transformations.  <tt>WeakVH</tt>
-    is automatically drops to null when the referenced <tt>Value</tt> is deleted,
-    and is updated across a <tt>replaceAllUsesWith</tt> operation.
-    <tt>AssertingVH</tt> aborts the program if the
-    referenced value is destroyed while it is being referenced.  <tt>CallbackVH</tt>
-    is a customizable class for handling value references.  See <a
-    href="http://llvm.org/doxygen/ValueHandle_8h-source.html">ValueHandle.h</a> 
-    for more information.</li>
-<li>The new '<a href="http://llvm.org/doxygen/Triple_8h-source.html">Triple
-    </a>' class centralizes a lot of logic that reasons about target
-    triples.</li>
-<li>The new '<a href="http://llvm.org/doxygen/ErrorHandling_8h-source.html">
-    llvm_report_error()</a>' set of APIs allows tools to embed the LLVM
-    optimizer and backend and recover from previously unrecoverable errors.</li>
-<li>LLVM has new abstractions for <a 
-    href="http://llvm.org/doxygen/Atomic_8h-source.html">atomic operations</a>
-    and <a href="http://llvm.org/doxygen/RWMutex_8h-source.html">reader/writer
-    locks</a>.</li>
-<li>LLVM has new <a href="http://llvm.org/doxygen/SourceMgr_8h-source.html">
-    <tt>SourceMgr</tt> and <tt>SMLoc</tt> classes</a> which implement caret
-    diagnostics and basic include stack processing for simple parsers. It is
-    used by tablegen, llvm-mc, the .ll parser and FileCheck.</li>
+<li>...</li>
 </ul>
 
 
@@ -862,32 +590,7 @@ branched.</p>
 <p>Other miscellaneous features include:</p>
 
 <ul>
-<li>LLVM now includes a new internal '<a 
-    href="http://llvm.org/cmds/FileCheck.html">FileCheck</a>' tool which allows
-    writing much more accurate regression tests that run faster.  Please see the
-    <a href="TestingGuide.html#FileCheck">FileCheck section of the Testing
-    Guide</a> for more information.</li>
-<li>LLVM profile information support has been significantly improved to produce
-correct use counts, and has support for edge profiling with reduced runtime
-overhead.  Combined, the generated profile information is more correct and
-imposes about half as much overhead (in 2.6, from 12% to 6% overhead on SPEC
-CPU2000).</li>
-<li>The C bindings (in the llvm/include/llvm-c directory) include many newly
-    supported APIs.</li>
-<li>LLVM 2.6 includes brand-new experimental bindings for the Ada 2005
-    programming language.</li>
-
-<li>The LLVMC driver has several new features:
-  <ul>
-  <li>Dynamic plugins now work on Windows.</li>
-  <li>New option property: init. Makes it possible to provide default values
-      for options defined in plugins (interface to <tt>cl::init</tt>).</li>
-  <li>New example: Skeleton, shows how to create a standalone LLVMC-based
-      driver.</li>
-  <li>New example: mcc16, a driver for the PIC16 toolchain.</li>
-  </ul>
-</li>
-
+<li>...</li>
 </ul>
 
 </div>
@@ -901,24 +604,15 @@ CPU2000).</li>
 <div class="doc_text">
 
 <p>If you're already an LLVM user or developer with out-of-tree changes based
-on LLVM 2.5, this section lists some "gotchas" that you may run into upgrading
+on LLVM 2.6, this section lists some "gotchas" that you may run into upgrading
 from the previous release.</p>
 
 <ul>
-<li>The Itanium (IA64) backend has been removed.  It was not actively supported
-    and had bitrotted.</li>
-<li>The BigBlock register allocator has been removed; it had also bitrotted.</li>
-<li>The C Backend (<tt>-march=c</tt>) is no longer considered part of the LLVM release
-criteria.  We still want it to work, but no one is maintaining it and it lacks
-support for arbitrary precision integers and other important IR features.</li>
-
-<li>All LLVM tools now default to overwriting their output file, behaving more
-    like standard unix tools.  Previously, this only happened with the '<tt>-f</tt>'
-    option.</li>
-<li>The LLVM build now builds all libraries as .a files instead of building
-  some libraries as relinked .o files.  This requires using some APIs like
-  InitializeAllTargets.h.
-  </li>
+<li>The LLVM interpreter now defaults to <em>not</em> using <tt>libffi</tt> even
+if you have it installed.  This makes it more likely that an LLVM built on one
+system will work when copied to a similar system.  To use <tt>libffi</tt>,
+configure with <tt>--enable-libffi</tt>.
+</li>
 </ul>
 
 
@@ -926,82 +620,30 @@ support for arbitrary precision integers and other important IR features.</li>
 API changes are:</p>
 
 <ul>
-<li>All uses of <tt>hash_set</tt> and <tt>hash_map</tt> have been removed from
-    the LLVM tree and the wrapper headers have been removed.</li>
-<li>The llvm/Streams.h and <tt>DOUT</tt> member of Debug.h have been removed.  The
-    <tt>llvm::Ostream</tt> class has been completely removed and replaced with
-    uses of <tt>raw_ostream</tt>.</li>
-<li>LLVM's global uniquing tables for <tt>Type</tt>s and <tt>Constant</tt>s have
-    been privatized into members of an <tt>LLVMContext</tt>.  A number of APIs
-    now take an <tt>LLVMContext</tt> as a parameter.  To smooth the transition
-    for clients that will only ever use a single context, the new 
-    <tt>getGlobalContext()</tt> API can be used to access a default global 
-    context which can be passed in any and all cases where a context is 
-    required.</li>
-<li>The <tt>getABITypeSize</tt> methods are now called <tt>getAllocSize</tt>.</li>
-<li>The <tt>Add</tt>, <tt>Sub</tt> and <tt>Mul</tt> operators are no longer
-    overloaded for floating-point types. Floating-point addition, subtraction
-    and multiplication are now represented with new operators <tt>FAdd</tt>,
-    <tt>FSub</tt> and <tt>FMul</tt>. In the <tt>IRBuilder</tt> API,
-    <tt>CreateAdd</tt>, <tt>CreateSub</tt>, <tt>CreateMul</tt> and
-    <tt>CreateNeg</tt> should only be used for integer arithmetic now;
-    <tt>CreateFAdd</tt>, <tt>CreateFSub</tt>, <tt>CreateFMul</tt> and
-    <tt>CreateFNeg</tt> should now be used for floating-point arithmetic.</li>
-<li>The <tt>DynamicLibrary</tt> class can no longer be constructed; its functionality has
-    moved to static member functions.</li>
-<li><tt>raw_fd_ostream</tt>'s constructor for opening a given filename now
-    takes an extra <tt>Force</tt> argument. If <tt>Force</tt> is set to
-    <tt>false</tt>, an error will be reported if a file with the given name
-    already exists. If <tt>Force</tt> is set to <tt>true</tt>, the file will
-    be silently truncated (which is the behavior before this flag was
-    added).</li>
-<li><tt>SCEVHandle</tt> no longer exists, because reference counting is no
-    longer done for <tt>SCEV*</tt> objects; instead, <tt>const SCEV*</tt>
-    should be used.</li>
-
-<li>Many APIs, notably <tt>llvm::Value</tt>, now use the <tt>StringRef</tt>
-and <tt>Twine</tt> classes instead of passing <tt>const char*</tt>
-or <tt>std::string</tt>, as described in
-the <a href="ProgrammersManual.html#string_apis">Programmer's Manual</a>. Most
-clients should be unaffected by this transition, unless they are used to
-<tt>Value::getName()</tt> returning a string. Here are some tips on updating to
-2.6:
-  <ul>
-    <li><tt>getNameStr()</tt> is still available, and matches the old
-      behavior. Replacing <tt>getName()</tt> calls with this is a safe option,
-      although more efficient alternatives are now possible.</li>
-
-    <li>If you were just relying on <tt>getName()</tt> being able to be sent to
-      a <tt>std::ostream</tt>, consider migrating
-      to <tt>llvm::raw_ostream</tt>.</li>
-      
-    <li>If you were using <tt>getName().c_str()</tt> to get a <tt>const
-        char*</tt> pointer to the name, you can use <tt>getName().data()</tt>.
-        Note that this string (as before) may not be the entire name if the
-        name contains embedded null characters.</li>
-
-    <li>If you were using <tt>operator +</tt> on the result of <tt>getName()</tt> and
-      treating the result as an <tt>std::string</tt>, you can either
-      use <tt>Twine::str</tt> to get the result as an <tt>std::string</tt>, or
-      could move to a <tt>Twine</tt> based design.</li>
-
-    <li><tt>isName()</tt> should be replaced with comparison
-      against <tt>getName()</tt> (this is now efficient).</li>
-  </ul>
-</li>
+<li><tt>ModuleProvider</tt> has been <a
+href="http://llvm.org/viewvc/llvm-project?view=rev&revision=94686">removed</a>
+and its methods moved to <tt>Module</tt> and <tt>GlobalValue</tt>.
+Most clients can remove uses of <tt>ExistingModuleProvider</tt>,
+replace <tt>getBitcodeModuleProvider</tt> with
+<tt>getLazyBitcodeModule</tt>, and pass their <tt>Module</tt> to
+functions that used to accept <tt>ModuleProvider</tt>.  Clients who
+wrote their own <tt>ModuleProvider</tt>s will need to derive from
+<tt>GVMaterializer</tt> instead and use
+<tt>Module::setMaterializer</tt> to attach it to a
+<tt>Module</tt>.</li>
+
+<li><tt>GhostLinkage</tt> has given up the ghost.
+<tt>GlobalValue</tt>s that have not yet been read from their backing
+storage have the same linkage they will have after being read in.
+Clients must replace calls to
+<tt>GlobalValue::hasNotBeenReadFromBitcode</tt> with
+<tt>GlobalValue::isMaterializable</tt>.</li>
+
+<li>FIXME: Debug info has been totally redone. Add pointers to new APIs. Substantial caveats about compatibility of .ll and .bc files.</li>
+
+<li>The <tt>llvm/Support/DataTypes.h</tt> header has moved
+to <tt>llvm/System/DataTypes.h</tt>.</li>
 
-<li>The registration interfaces for backend Targets have changed (what was
-previously <tt>TargetMachineRegistry</tt>). For backend authors, see the <a
-href="WritingAnLLVMBackend.html#TargetRegistration">Writing An LLVM Backend</a>
-guide. For clients, the notable API changes are:
-  <ul>
-    <li><tt>TargetMachineRegistry</tt> has been renamed
-      to <tt>TargetRegistry</tt>.</li>
-
-    <li>Clients should move to using the <tt>TargetRegistry::lookupTarget()</tt>
-      function to find targets.</li>
-  </ul>
-</li>
 </ul>
 
 </div>
@@ -1055,8 +697,8 @@ there isn't already one.</p>
 <li>The llvm-gcc bootstrap will fail with some versions of binutils (e.g. 2.15)
     with a message of "<tt><a href="http://llvm.org/PR5004">Error: can not do 8
     byte pc-relative relocation</a></tt>" when building C++ code.  We intend to
-    fix this on mainline, but a workaround for 2.6 is to upgrade to binutils
-    2.17 or later.</li>
+    fix this on mainline, but a workaround is to upgrade to binutils 2.17 or
+    later.</li>
     
 <li>LLVM will not correctly compile on Solaris and/or OpenSolaris
 using the stock GCC 3.x.x series 'out the box',
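For readers tracking the <tt>ModuleProvider</tt> removal described in the release-note hunk above, the migration for most clients is mechanical. The following is a hedged sketch against the LLVM 2.7-era C++ API of this revision (illustrative only; it will not build against modern LLVM, where these interfaces have long since changed again):

```cpp
// Sketch of the ModuleProvider -> Module migration described in the
// release notes above, using the 2.7-era headers (illustrative only).
#include "llvm/Bitcode/ReaderWriter.h"            // getLazyBitcodeModule
#include "llvm/ExecutionEngine/ExecutionEngine.h" // EngineBuilder
#include "llvm/LLVMContext.h"
#include "llvm/Module.h"
#include "llvm/Support/MemoryBuffer.h"

using namespace llvm;

ExecutionEngine *makeLazyJIT(MemoryBuffer *Buf, LLVMContext &Ctx) {
  std::string Err;
  // Before: ModuleProvider *MP = getBitcodeModuleProvider(Buf, Ctx, &Err);
  Module *M = getLazyBitcodeModule(Buf, Ctx, &Err);
  if (!M) return 0;
  // Before: EngineBuilder took the ModuleProvider; now it takes the Module.
  return EngineBuilder(M).setErrorStr(&Err).create();
}
```

Clients that wrote their own <tt>ModuleProvider</tt>s instead derive from <tt>GVMaterializer</tt> and attach it with <tt>Module::setMaterializer</tt>, per the note above.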
diff --git a/libclamav/c++/llvm/docs/WritingAnLLVMBackend.html b/libclamav/c++/llvm/docs/WritingAnLLVMBackend.html
index 1457173..43766b5 100644
--- a/libclamav/c++/llvm/docs/WritingAnLLVMBackend.html
+++ b/libclamav/c++/llvm/docs/WritingAnLLVMBackend.html
@@ -354,8 +354,6 @@ public:
   // Pass Pipeline Configuration
   virtual bool addInstSelector(PassManagerBase &amp;PM, bool Fast);
   virtual bool addPreEmitPass(PassManagerBase &amp;PM, bool Fast);
-  virtual bool addAssemblyEmitter(PassManagerBase &amp;PM, bool Fast, 
-                                  std::ostream &amp;Out);
 };
 
 } // end namespace llvm
diff --git a/libclamav/c++/llvm/docs/index.html b/libclamav/c++/llvm/docs/index.html
index 5c50c41..28f4cde 100644
--- a/libclamav/c++/llvm/docs/index.html
+++ b/libclamav/c++/llvm/docs/index.html
@@ -2,12 +2,16 @@
                       "http://www.w3.org/TR/html4/strict.dtd">
 <html>
 <head>
-  <title>Documentation for the LLVM System</title>
+  <title>Documentation for the LLVM System at SVN head</title>
   <link rel="stylesheet" href="llvm.css" type="text/css">
 </head>
 <body>
 
-<div class="doc_title">Documentation for the LLVM System</div>
+<div class="doc_title">Documentation for the LLVM System at SVN head</div>
+
+<p class="doc_warning">If you are using a released version of LLVM,
+see <a href="http://llvm.org/releases/">the download page</a> to find
+your documentation.</p>
 
 <div class="doc_text">
 <table class="layout" width="95%"><tr class="layout"><td class="left">
diff --git a/libclamav/c++/llvm/docs/tutorial/LangImpl1.html b/libclamav/c++/llvm/docs/tutorial/LangImpl1.html
index 5e1786c..66843db 100644
--- a/libclamav/c++/llvm/docs/tutorial/LangImpl1.html
+++ b/libclamav/c++/llvm/docs/tutorial/LangImpl1.html
@@ -342,7 +342,7 @@ so that you can use the lexer and parser together.
 
   <a href="mailto:sabre at nondot.org">Chris Lattner</a><br>
   <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
-  Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $
+  Last modified: $Date$
 </address>
 </body>
 </html>
diff --git a/libclamav/c++/llvm/docs/tutorial/LangImpl2.html b/libclamav/c++/llvm/docs/tutorial/LangImpl2.html
index 5bcd0dd..9c13b48 100644
--- a/libclamav/c++/llvm/docs/tutorial/LangImpl2.html
+++ b/libclamav/c++/llvm/docs/tutorial/LangImpl2.html
@@ -1227,7 +1227,7 @@ int main() {
 
   <a href="mailto:sabre at nondot.org">Chris Lattner</a><br>
   <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
-  Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $
+  Last modified: $Date$
 </address>
 </body>
 </html>
diff --git a/libclamav/c++/llvm/docs/tutorial/LangImpl3.html b/libclamav/c++/llvm/docs/tutorial/LangImpl3.html
index e3d2117..39ec628 100644
--- a/libclamav/c++/llvm/docs/tutorial/LangImpl3.html
+++ b/libclamav/c++/llvm/docs/tutorial/LangImpl3.html
@@ -1263,7 +1263,7 @@ int main() {
 
   <a href="mailto:sabre at nondot.org">Chris Lattner</a><br>
   <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
-  Last modified: $Date: 2009-07-21 11:05:13 -0700 (Tue, 21 Jul 2009) $
+  Last modified: $Date$
 </address>
 </body>
 </html>
diff --git a/libclamav/c++/llvm/docs/tutorial/LangImpl4.html b/libclamav/c++/llvm/docs/tutorial/LangImpl4.html
index 728d518..70fd673 100644
--- a/libclamav/c++/llvm/docs/tutorial/LangImpl4.html
+++ b/libclamav/c++/llvm/docs/tutorial/LangImpl4.html
@@ -171,10 +171,7 @@ add a set of optimizations to run.  The code looks like this:</p>
 
 <div class="doc_code">
 <pre>
-  ExistingModuleProvider *OurModuleProvider =
-      new ExistingModuleProvider(TheModule);
-
-  FunctionPassManager OurFPM(OurModuleProvider);
+  FunctionPassManager OurFPM(TheModule);
 
   // Set up the optimizer pipeline.  Start with registering info about how the
   // target lays out data structures.
@@ -198,19 +195,13 @@ add a set of optimizations to run.  The code looks like this:</p>
 </pre>
 </div>
 
-<p>This code defines two objects, an <tt>ExistingModuleProvider</tt> and a
-<tt>FunctionPassManager</tt>.  The former is basically a wrapper around our
-<tt>Module</tt> that the PassManager requires.  It provides certain flexibility
-that we're not going to take advantage of here, so I won't dive into any details 
-about it.</p>
-
-<p>The meat of the matter here, is the definition of "<tt>OurFPM</tt>".  It
-requires a pointer to the <tt>Module</tt> (through the <tt>ModuleProvider</tt>)
-to construct itself.  Once it is set up, we use a series of "add" calls to add
-a bunch of LLVM passes.  The first pass is basically boilerplate, it adds a pass
-so that later optimizations know how the data structures in the program are
-laid out.  The "<tt>TheExecutionEngine</tt>" variable is related to the JIT,
-which we will get to in the next section.</p>
+<p>This code defines a <tt>FunctionPassManager</tt>, "<tt>OurFPM</tt>".  It
+requires a pointer to the <tt>Module</tt> to construct itself.  Once it is set
+up, we use a series of "add" calls to add a bunch of LLVM passes.  The first
+pass is basically boilerplate: it adds a pass so that later optimizations know
+how the data structures in the program are laid out.  The
+"<tt>TheExecutionEngine</tt>" variable is related to the JIT, which we will get
+to in the next section.</p>
 
 <p>In this case, we choose to add 4 optimization passes.  The passes we chose
 here are a pretty standard set of "cleanup" optimizations that are useful for
@@ -302,8 +293,8 @@ by adding a global variable and a call in <tt>main</tt>:</p>
 ...
 int main() {
   ..
-  <b>// Create the JIT.  This takes ownership of the module and module provider.
-  TheExecutionEngine = EngineBuilder(OurModuleProvider).create();</b>
+  <b>// Create the JIT.  This takes ownership of the module.
+  TheExecutionEngine = EngineBuilder(TheModule).create();</b>
   ..
 }
 </pre>
@@ -494,7 +485,7 @@ LLVM JIT and optimizer.  To build this example, use:
 <div class="doc_code">
 <pre>
    # Compile
-   g++ -g toy.cpp `llvm-config --cppflags --ldflags --libs core jit interpreter native` -O3 -o toy
+   g++ -g toy.cpp `llvm-config --cppflags --ldflags --libs core jit native` -O3 -o toy
    # Run
    ./toy
 </pre>
@@ -511,11 +502,9 @@ at runtime.</p>
 <pre>
 #include "llvm/DerivedTypes.h"
 #include "llvm/ExecutionEngine/ExecutionEngine.h"
-#include "llvm/ExecutionEngine/Interpreter.h"
 #include "llvm/ExecutionEngine/JIT.h"
 #include "llvm/LLVMContext.h"
 #include "llvm/Module.h"
-#include "llvm/ModuleProvider.h"
 #include "llvm/PassManager.h"
 #include "llvm/Analysis/Verifier.h"
 #include "llvm/Target/TargetData.h"
@@ -1084,13 +1073,15 @@ int main() {
   // Make the module, which holds all the code.
   TheModule = new Module("my cool jit", Context);
 
-  ExistingModuleProvider *OurModuleProvider =
-      new ExistingModuleProvider(TheModule);
-
-  // Create the JIT.  This takes ownership of the module and module provider.
-  TheExecutionEngine = EngineBuilder(OurModuleProvider).create();
+  // Create the JIT.  This takes ownership of the module.
+  std::string ErrStr;
+  TheExecutionEngine = EngineBuilder(TheModule).setErrorStr(&ErrStr).create();
+  if (!TheExecutionEngine) {
+    fprintf(stderr, "Could not create ExecutionEngine: %s\n", ErrStr.c_str());
+    exit(1);
+  }
 
-  FunctionPassManager OurFPM(OurModuleProvider);
+  FunctionPassManager OurFPM(TheModule);
 
   // Set up the optimizer pipeline.  Start with registering info about how the
   // target lays out data structures.
@@ -1135,7 +1126,7 @@ int main() {
 
   <a href="mailto:sabre at nondot.org">Chris Lattner</a><br>
   <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
-  Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $
+  Last modified: $Date$
 </address>
 </body>
 </html>
diff --git a/libclamav/c++/llvm/docs/tutorial/LangImpl5.html b/libclamav/c++/llvm/docs/tutorial/LangImpl5.html
index f93b59b..2b0450f 100644
--- a/libclamav/c++/llvm/docs/tutorial/LangImpl5.html
+++ b/libclamav/c++/llvm/docs/tutorial/LangImpl5.html
@@ -902,11 +902,9 @@ if/then/else and for expressions..  To build this example, use:
 <pre>
 #include "llvm/DerivedTypes.h"
 #include "llvm/ExecutionEngine/ExecutionEngine.h"
-#include "llvm/ExecutionEngine/Interpreter.h"
 #include "llvm/ExecutionEngine/JIT.h"
 #include "llvm/LLVMContext.h"
 #include "llvm/Module.h"
-#include "llvm/ModuleProvider.h"
 #include "llvm/PassManager.h"
 #include "llvm/Analysis/Verifier.h"
 #include "llvm/Target/TargetData.h"
@@ -1720,13 +1718,15 @@ int main() {
   // Make the module, which holds all the code.
   TheModule = new Module("my cool jit", Context);
 
-  ExistingModuleProvider *OurModuleProvider =
-      new ExistingModuleProvider(TheModule);
-
-  // Create the JIT.  This takes ownership of the module and module provider.
-  TheExecutionEngine = EngineBuilder(OurModuleProvider).create();
+  // Create the JIT.  This takes ownership of the module.
+  std::string ErrStr;
+  TheExecutionEngine = EngineBuilder(TheModule).setErrorStr(&ErrStr).create();
+  if (!TheExecutionEngine) {
+    fprintf(stderr, "Could not create ExecutionEngine: %s\n", ErrStr.c_str());
+    exit(1);
+  }
 
-  FunctionPassManager OurFPM(OurModuleProvider);
+  FunctionPassManager OurFPM(TheModule);
 
   // Set up the optimizer pipeline.  Start with registering info about how the
   // target lays out data structures.
@@ -1771,7 +1771,7 @@ int main() {
 
   <a href="mailto:sabre at nondot.org">Chris Lattner</a><br>
   <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
-  Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $
+  Last modified: $Date$
 </address>
 </body>
 </html>
diff --git a/libclamav/c++/llvm/docs/tutorial/LangImpl6.html b/libclamav/c++/llvm/docs/tutorial/LangImpl6.html
index f113e96..5fae906 100644
--- a/libclamav/c++/llvm/docs/tutorial/LangImpl6.html
+++ b/libclamav/c++/llvm/docs/tutorial/LangImpl6.html
@@ -821,11 +821,9 @@ if/then/else and for expressions..  To build this example, use:
 <pre>
 #include "llvm/DerivedTypes.h"
 #include "llvm/ExecutionEngine/ExecutionEngine.h"
-#include "llvm/ExecutionEngine/Interpreter.h"
 #include "llvm/ExecutionEngine/JIT.h"
 #include "llvm/LLVMContext.h"
 #include "llvm/Module.h"
-#include "llvm/ModuleProvider.h"
 #include "llvm/PassManager.h"
 #include "llvm/Analysis/Verifier.h"
 #include "llvm/Target/TargetData.h"
@@ -1757,13 +1755,15 @@ int main() {
   // Make the module, which holds all the code.
   TheModule = new Module("my cool jit", Context);
 
-  ExistingModuleProvider *OurModuleProvider =
-      new ExistingModuleProvider(TheModule);
-
-  // Create the JIT.  This takes ownership of the module and module provider.
-  TheExecutionEngine = EngineBuilder(OurModuleProvider).create();
+  // Create the JIT.  This takes ownership of the module.
+  std::string ErrStr;
+  TheExecutionEngine = EngineBuilder(TheModule).setErrorStr(&ErrStr).create();
+  if (!TheExecutionEngine) {
+    fprintf(stderr, "Could not create ExecutionEngine: %s\n", ErrStr.c_str());
+    exit(1);
+  }
 
-  FunctionPassManager OurFPM(OurModuleProvider);
+  FunctionPassManager OurFPM(TheModule);
 
   // Set up the optimizer pipeline.  Start with registering info about how the
   // target lays out data structures.
@@ -1808,7 +1808,7 @@ int main() {
 
   <a href="mailto:sabre at nondot.org">Chris Lattner</a><br>
   <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
-  Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $
+  Last modified: $Date$
 </address>
 </body>
 </html>
diff --git a/libclamav/c++/llvm/docs/tutorial/LangImpl7.html b/libclamav/c++/llvm/docs/tutorial/LangImpl7.html
index ec07fa8..f0a03c3 100644
--- a/libclamav/c++/llvm/docs/tutorial/LangImpl7.html
+++ b/libclamav/c++/llvm/docs/tutorial/LangImpl7.html
@@ -1004,11 +1004,9 @@ variables and var/in support.  To build this example, use:
 <pre>
 #include "llvm/DerivedTypes.h"
 #include "llvm/ExecutionEngine/ExecutionEngine.h"
-#include "llvm/ExecutionEngine/Interpreter.h"
 #include "llvm/ExecutionEngine/JIT.h"
 #include "llvm/LLVMContext.h"
 #include "llvm/Module.h"
-#include "llvm/ModuleProvider.h"
 #include "llvm/PassManager.h"
 #include "llvm/Analysis/Verifier.h"
 #include "llvm/Target/TargetData.h"
@@ -2105,13 +2103,15 @@ int main() {
   // Make the module, which holds all the code.
   TheModule = new Module("my cool jit", Context);
 
-  ExistingModuleProvider *OurModuleProvider =
-      new ExistingModuleProvider(TheModule);
-
-  // Create the JIT.  This takes ownership of the module and module provider.
-  TheExecutionEngine = EngineBuilder(OurModuleProvider).create();
+  // Create the JIT.  This takes ownership of the module.
+  std::string ErrStr;
+  TheExecutionEngine = EngineBuilder(TheModule).setErrorStr(&amp;ErrStr).create();
+  if (!TheExecutionEngine) {
+    fprintf(stderr, "Could not create ExecutionEngine: %s\n", ErrStr.c_str());
+    exit(1);
+  }
 
-  FunctionPassManager OurFPM(OurModuleProvider);
+  FunctionPassManager OurFPM(TheModule);
 
   // Set up the optimizer pipeline.  Start with registering info about how the
   // target lays out data structures.
@@ -2158,7 +2158,7 @@ int main() {
 
   <a href="mailto:sabre at nondot.org">Chris Lattner</a><br>
   <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
-  Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $
+  Last modified: $Date$
 </address>
 </body>
 </html>
diff --git a/libclamav/c++/llvm/docs/tutorial/LangImpl8.html b/libclamav/c++/llvm/docs/tutorial/LangImpl8.html
index 855b8f3..64a6200 100644
--- a/libclamav/c++/llvm/docs/tutorial/LangImpl8.html
+++ b/libclamav/c++/llvm/docs/tutorial/LangImpl8.html
@@ -359,7 +359,7 @@ Passing Style</a> and the use of tail calls (which LLVM also supports).</p>
 
   <a href="mailto:sabre at nondot.org">Chris Lattner</a><br>
   <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
-  Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $
+  Last modified: $Date$
 </address>
 </body>
 </html>
diff --git a/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl1.html b/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl1.html
index 3c0fd8b..98c1124 100644
--- a/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl1.html
+++ b/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl1.html
@@ -359,7 +359,7 @@ include a driver so that you can use the lexer and parser together.
   <a href="mailto:sabre at nondot.org">Chris Lattner</a><br>
   <a href="mailto:idadesub at users.sourceforge.net">Erick Tryzelaar</a><br>
   <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
-  Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $
+  Last modified: $Date$
 </address>
 </body>
 </html>
diff --git a/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl2.html b/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl2.html
index 7d60aa6..6665109 100644
--- a/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl2.html
+++ b/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl2.html
@@ -1039,7 +1039,7 @@ main ()
   <a href="mailto:sabre at nondot.org">Chris Lattner</a>
   <a href="mailto:erickt at users.sourceforge.net">Erick Tryzelaar</a><br>
   <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
-  Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $
+  Last modified: $Date$
 </address>
 </body>
 </html>
diff --git a/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl3.html b/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl3.html
index a598875..f3814c8 100644
--- a/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl3.html
+++ b/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl3.html
@@ -1085,7 +1085,7 @@ main ()
   <a href="mailto:sabre at nondot.org">Chris Lattner</a><br>
   <a href="mailto:idadesub at users.sourceforge.net">Erick Tryzelaar</a><br>
   <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
-  Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $
+  Last modified: $Date$
 </address>
 </body>
 </html>
diff --git a/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl4.html b/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl4.html
index 543e12f..534502d 100644
--- a/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl4.html
+++ b/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl4.html
@@ -1032,7 +1032,7 @@ extern double putchard(double X) {
   <a href="mailto:sabre at nondot.org">Chris Lattner</a><br>
   <a href="mailto:idadesub at users.sourceforge.net">Erick Tryzelaar</a><br>
   <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
-  Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $
+  Last modified: $Date$
 </address>
 </body>
 </html>
diff --git a/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl5.html b/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl5.html
index f19e900..01e1255 100644
--- a/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl5.html
+++ b/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl5.html
@@ -1563,7 +1563,7 @@ operators</a>
   <a href="mailto:sabre at nondot.org">Chris Lattner</a><br>
   <a href="mailto:idadesub at users.sourceforge.net">Erick Tryzelaar</a><br>
   <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
-  Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $
+  Last modified: $Date$
 </address>
 </body>
 </html>
diff --git a/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl6.html b/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl6.html
index 2edb22e..b5606e7 100644
--- a/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl6.html
+++ b/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl6.html
@@ -1568,7 +1568,7 @@ SSA construction</a>
   <a href="mailto:sabre at nondot.org">Chris Lattner</a><br>
   <a href="mailto:idadesub at users.sourceforge.net">Erick Tryzelaar</a><br>
   <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
-  Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $
+  Last modified: $Date$
 </address>
 </body>
 </html>
diff --git a/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl7.html b/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl7.html
index 0776821..aff97c4 100644
--- a/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl7.html
+++ b/libclamav/c++/llvm/docs/tutorial/OCamlLangImpl7.html
@@ -1901,7 +1901,7 @@ extern double printd(double X) {
   <a href="mailto:sabre at nondot.org">Chris Lattner</a><br>
   <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
   <a href="mailto:idadesub at users.sourceforge.net">Erick Tryzelaar</a><br>
-  Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $
+  Last modified: $Date$
 </address>
 </body>
 </html>
diff --git a/libclamav/c++/llvm/include/llvm-c/Core.h b/libclamav/c++/llvm/include/llvm-c/Core.h
index d57c250..4500fcc 100644
--- a/libclamav/c++/llvm/include/llvm-c/Core.h
+++ b/libclamav/c++/llvm/include/llvm-c/Core.h
@@ -78,8 +78,9 @@ typedef struct LLVMOpaqueValue *LLVMValueRef;
 typedef struct LLVMOpaqueBasicBlock *LLVMBasicBlockRef;
 typedef struct LLVMOpaqueBuilder *LLVMBuilderRef;
 
-/* Used to provide a module to JIT or interpreter.
- * See the llvm::ModuleProvider class.
+/* Interface used to provide a module to JIT or interpreter.  This is now just a
+ * synonym for llvm::Module, but we have to keep using the different type to
+ * keep binary compatibility.
  */
 typedef struct LLVMOpaqueModuleProvider *LLVMModuleProviderRef;
 
@@ -117,7 +118,8 @@ typedef enum {
     LLVMNoCaptureAttribute  = 1<<21,
     LLVMNoRedZoneAttribute  = 1<<22,
     LLVMNoImplicitFloatAttribute = 1<<23,
-    LLVMNakedAttribute      = 1<<24
+    LLVMNakedAttribute      = 1<<24,
+    LLVMInlineHintAttribute = 1<<25
 } LLVMAttribute;
 
 typedef enum {
@@ -191,7 +193,8 @@ typedef enum {
   LLVMPointerTypeKind,     /**< Pointers */
   LLVMOpaqueTypeKind,      /**< Opaque: type with unknown structure */
   LLVMVectorTypeKind,      /**< SIMD 'packed' format, or other vector type */
-  LLVMMetadataTypeKind     /**< Metadata */
+  LLVMMetadataTypeKind,    /**< Metadata */
+  LLVMUnionTypeKind        /**< Unions */
 } LLVMTypeKind;
 
 typedef enum {
@@ -210,8 +213,7 @@ typedef enum {
   LLVMDLLImportLinkage,   /**< Function to be imported from DLL */
   LLVMDLLExportLinkage,   /**< Function to be accessible from DLL */
   LLVMExternalWeakLinkage,/**< ExternalWeak linkage description */
-  LLVMGhostLinkage,       /**< Stand-in functions for streaming fns from
-                               bitcode */
+  LLVMGhostLinkage,       /**< Obsolete */
   LLVMCommonLinkage,      /**< Tentative definitions */
   LLVMLinkerPrivateLinkage /**< Like Private, but linker removes. */
 } LLVMLinkage;
@@ -371,6 +373,13 @@ unsigned LLVMCountStructElementTypes(LLVMTypeRef StructTy);
 void LLVMGetStructElementTypes(LLVMTypeRef StructTy, LLVMTypeRef *Dest);
 LLVMBool LLVMIsPackedStruct(LLVMTypeRef StructTy);
 
+/* Operations on union types */
+LLVMTypeRef LLVMUnionTypeInContext(LLVMContextRef C, LLVMTypeRef *ElementTypes,
+                                   unsigned ElementCount);
+LLVMTypeRef LLVMUnionType(LLVMTypeRef *ElementTypes, unsigned ElementCount);
+unsigned LLVMCountUnionElementTypes(LLVMTypeRef UnionTy);
+void LLVMGetUnionElementTypes(LLVMTypeRef UnionTy, LLVMTypeRef *Dest);
+
 /* Operations on array, pointer, and vector types (sequence types) */
 LLVMTypeRef LLVMArrayType(LLVMTypeRef ElementType, unsigned ElementCount);
 LLVMTypeRef LLVMPointerType(LLVMTypeRef ElementType, unsigned AddressSpace);
@@ -914,17 +923,15 @@ LLVMValueRef LLVMBuildPtrDiff(LLVMBuilderRef, LLVMValueRef LHS,
 
 /*===-- Module providers --------------------------------------------------===*/
 
-/* Encapsulates the module M in a module provider, taking ownership of the
- * module.
- * See the constructor llvm::ExistingModuleProvider::ExistingModuleProvider.
+/* Changes the type of M so it can be passed to FunctionPassManagers and the
+ * JIT.  They take ModuleProviders for historical reasons.
  */
 LLVMModuleProviderRef
 LLVMCreateModuleProviderForExistingModule(LLVMModuleRef M);
 
-/* Destroys the module provider MP as well as the contained module.
- * See the destructor llvm::ModuleProvider::~ModuleProvider.
+/* Destroys the module M.
  */
-void LLVMDisposeModuleProvider(LLVMModuleProviderRef MP);
+void LLVMDisposeModuleProvider(LLVMModuleProviderRef M);
 
 
 /*===-- Memory buffers ----------------------------------------------------===*/
@@ -981,7 +988,6 @@ void LLVMDisposePassManager(LLVMPassManagerRef PM);
 }
 
 namespace llvm {
-  class ModuleProvider;
   class MemoryBuffer;
   class PassManagerBase;
   
@@ -1018,11 +1024,16 @@ namespace llvm {
   DEFINE_SIMPLE_CONVERSION_FUNCTIONS(BasicBlock,         LLVMBasicBlockRef    )
   DEFINE_SIMPLE_CONVERSION_FUNCTIONS(IRBuilder<>,        LLVMBuilderRef       )
   DEFINE_SIMPLE_CONVERSION_FUNCTIONS(PATypeHolder,       LLVMTypeHandleRef    )
-  DEFINE_SIMPLE_CONVERSION_FUNCTIONS(ModuleProvider,     LLVMModuleProviderRef)
   DEFINE_SIMPLE_CONVERSION_FUNCTIONS(MemoryBuffer,       LLVMMemoryBufferRef  )
   DEFINE_SIMPLE_CONVERSION_FUNCTIONS(LLVMContext,        LLVMContextRef       )
   DEFINE_SIMPLE_CONVERSION_FUNCTIONS(Use,                LLVMUseIteratorRef           )
   DEFINE_STDCXX_CONVERSION_FUNCTIONS(PassManagerBase,    LLVMPassManagerRef   )
+  /* LLVMModuleProviderRef exists for historical reasons, but now just holds a
+   * Module.
+   */
+  inline Module *unwrap(LLVMModuleProviderRef MP) {
+    return reinterpret_cast<Module*>(MP);
+  }
   
   #undef DEFINE_STDCXX_CONVERSION_FUNCTIONS
   #undef DEFINE_ISA_CONVERSION_FUNCTIONS
diff --git a/libclamav/c++/llvm/include/llvm-c/EnhancedDisassembly.h b/libclamav/c++/llvm/include/llvm-c/EnhancedDisassembly.h
new file mode 100644
index 0000000..9cd1e1f
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm-c/EnhancedDisassembly.h
@@ -0,0 +1,515 @@
+/*===-- llvm-c/EnhancedDisassembly.h - Disassembler C Interface ---*- C -*-===*\
+|*                                                                            *|
+|*                     The LLVM Compiler Infrastructure                       *|
+|*                                                                            *|
+|* This file is distributed under the University of Illinois Open Source      *|
+|* License. See LICENSE.TXT for details.                                      *|
+|*                                                                            *|
+|*===----------------------------------------------------------------------===*|
+|*                                                                            *|
+|* This header declares the C interface to EnhancedDisassembly.so, which      *|
+|* implements a disassembler with the ability to extract operand values and   *|
+|* individual tokens from assembly instructions.                              *|
+|*                                                                            *|
+|* The header declares additional interfaces if the host compiler supports    *|
+|* the blocks API.                                                            *|
+|*                                                                            *|
+\*===----------------------------------------------------------------------===*/
+
+#ifndef LLVM_C_ENHANCEDDISASSEMBLY_H
+#define LLVM_C_ENHANCEDDISASSEMBLY_H
+
+#include "llvm/System/DataTypes.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/*!
+ @typedef EDByteReaderCallback
+ Interface to memory from which instructions may be read.
+ @param byte A pointer whose target should be filled in with the data returned.
+ @param address The address of the byte to be read.
+ @param arg An anonymous argument for client use.
+ @result 0 on success; -1 otherwise.
+ */
+typedef int (*EDByteReaderCallback)(uint8_t *byte, uint64_t address, void *arg);
+
+/*!
+ @typedef EDRegisterReaderCallback
+ Interface to register state from which register values may be read.
+ @param value A pointer whose target should be filled in with the value of the
+   register.
+ @param regID The LLVM register identifier for the register to read.
+ @param arg An anonymous argument for client use.
+ @result 0 if the register could be read; -1 otherwise.
+ */
+typedef int (*EDRegisterReaderCallback)(uint64_t *value, unsigned regID, 
+                                        void* arg);
+
+/*!
+ @typedef EDAssemblySyntax_t
+ An assembly syntax for use in tokenizing instructions.
+ */
+typedef enum {
+/*! @constant kEDAssemblySyntaxX86Intel Intel syntax for i386 and x86_64. */
+  kEDAssemblySyntaxX86Intel  = 0,
+/*! @constant kEDAssemblySyntaxX86ATT AT&T syntax for i386 and x86_64. */
+  kEDAssemblySyntaxX86ATT    = 1
+} EDAssemblySyntax_t;
+
+/*!
+ @typedef EDDisassemblerRef
+ Encapsulates a disassembler for a single CPU architecture.
+ */
+struct EDDisassembler;
+typedef struct EDDisassembler *EDDisassemblerRef;
+
+/*!
+ @typedef EDInstRef
+ Encapsulates a single disassembled instruction in one assembly syntax.
+ */
+struct EDInst;
+typedef struct EDInst *EDInstRef;
+
+/*!
+ @typedef EDTokenRef
+ Encapsulates a token from the disassembly of an instruction.
+ */
+struct EDToken;
+typedef struct EDToken *EDTokenRef;
+
+/*!
+ @typedef EDOperandRef
+ Encapsulates an operand of an instruction.
+ */
+struct EDOperand;
+typedef struct EDOperand *EDOperandRef;
+  
+/*!
+ @functiongroup Getting a disassembler
+ */
+
+/*!
+ @function EDGetDisassembler
+ Gets the disassembler for a given target.
+ @param disassembler A pointer whose target will be filled in with the 
+   disassembler.
+ @param triple Identifies the target.  Example: "x86_64-apple-darwin10"
+ @param syntax The assembly syntax to use when decoding instructions.
+ @result 0 on success; -1 otherwise.
+ */
+int EDGetDisassembler(EDDisassemblerRef *disassembler,
+                      const char *triple,
+                      EDAssemblySyntax_t syntax);
+
+/*!
+ @functiongroup Generic architectural queries
+ */
+  
+/*!
+ @function EDGetRegisterName
+ Gets the human-readable name for a given register.
+ @param regName A pointer whose target will be pointed at the name of the
+   register.  The name does not need to be deallocated and will be valid for the lifetime of the disassembler.
+ @param disassembler The disassembler to query for the name.
+ @param regID The register identifier, as returned by EDRegisterTokenValue.
+ @result 0 on success; -1 otherwise.
+ */
+int EDGetRegisterName(const char** regName,
+                      EDDisassemblerRef disassembler,
+                      unsigned regID);
+  
+/*!
+ @function EDRegisterIsStackPointer
+ Determines if a register is one of the platform's stack-pointer registers.
+ @param disassembler The disassembler to query.
+ @param regID The register identifier, as returned by EDRegisterTokenValue.
+ @result 1 if true; 0 otherwise.
+ */
+int EDRegisterIsStackPointer(EDDisassemblerRef disassembler,
+                             unsigned regID);
+
+/*!
+ @function EDRegisterIsProgramCounter
+ Determines if a register is one of the platform's program-counter registers.
+ @param disassembler The disassembler to query.
+ @param regID The register identifier, as returned by EDRegisterTokenValue.
+ @result 1 if true; 0 otherwise.
+ */
+int EDRegisterIsProgramCounter(EDDisassemblerRef disassembler,
+                               unsigned regID);
+  
+/*!
+ @functiongroup Creating and querying instructions
+ */
+  
+/*!
+ @function EDCreateInsts
+ Gets a set of contiguous instructions from a disassembler.
+ @param insts A pointer to an array that will be filled in with the
+   instructions.  Must have at least count entries.  Entries not filled in will 
+   be set to NULL.
+ @param count The maximum number of instructions to fill in.
+ @param disassembler The disassembler to use when decoding the instructions.
+ @param byteReader The function to use when reading the instruction's machine
+   code.
+ @param address The address of the first byte of the instruction.
+ @param arg An anonymous argument to be passed to byteReader.
+ @result The number of instructions read on success; 0 otherwise.
+ */
+unsigned int EDCreateInsts(EDInstRef *insts,
+                           unsigned int count,
+                           EDDisassemblerRef disassembler,
+                           EDByteReaderCallback byteReader,
+                           uint64_t address,
+                           void *arg);
+
+/*!
+ @function EDReleaseInst
+ Frees the memory for an instruction.  The instruction can no longer be accessed
+ after this call.
+ @param inst The instruction to be freed.
+ */
+void EDReleaseInst(EDInstRef inst);
+
+/*!
+ @function EDInstByteSize
+ @param inst The instruction to be queried.
+ @result The number of bytes in the instruction's machine-code representation.
+ */
+int EDInstByteSize(EDInstRef inst);
+
+/*!
+ @function EDGetInstString
+ Gets the disassembled text equivalent of the instruction.
+ @param buf A pointer whose target will be filled in with a pointer to the
+   string.  (The string becomes invalid when the instruction is released.)
+ @param inst The instruction to be queried.
+ @result 0 on success; -1 otherwise.
+ */
+int EDGetInstString(const char **buf,
+                    EDInstRef inst);
+
+/*!
+ @function EDInstID
+ @param instID A pointer whose target will be filled in with the LLVM identifier
+   for the instruction.
+ @param inst The instruction to be queried.
+ @result 0 on success; -1 otherwise.
+ */
+int EDInstID(unsigned *instID, EDInstRef inst);
+  
+/*!
+ @function EDInstIsBranch
+ @param inst The instruction to be queried.
+ @result 1 if the instruction is a branch instruction; 0 if it is some other
+   type of instruction; -1 if there was an error.
+ */
+int EDInstIsBranch(EDInstRef inst);
+
+/*!
+ @function EDInstIsMove
+ @param inst The instruction to be queried.
+ @result 1 if the instruction is a move instruction; 0 if it is some other
+   type of instruction; -1 if there was an error.
+ */
+int EDInstIsMove(EDInstRef inst);
+
+/*!
+ @function EDBranchTargetID
+ @param inst The instruction to be queried.
+ @result The ID of the branch target operand, suitable for use with 
+   EDCopyOperand.  -1 if no such operand exists.
+ */
+int EDBranchTargetID(EDInstRef inst);
+
+/*!
+ @function EDMoveSourceID
+ @param inst The instruction to be queried.
+ @result The ID of the move source operand, suitable for use with 
+   EDCopyOperand.  -1 if no such operand exists.
+ */
+int EDMoveSourceID(EDInstRef inst);
+
+/*!
+ @function EDMoveTargetID
+ @param inst The instruction to be queried.
+ @result The ID of the move target operand, suitable for use with 
+   EDCopyOperand.  -1 if no such operand exists.
+ */
+int EDMoveTargetID(EDInstRef inst);
+
+/*!
+ @functiongroup Creating and querying tokens
+ */
+  
+/*!
+ @function EDNumTokens
+ @param inst The instruction to be queried.
+ @result The number of tokens in the instruction, or -1 on error.
+ */
+int EDNumTokens(EDInstRef inst);
+
+/*!
+ @function EDGetToken
+ Retrieves a token from an instruction.  The token is valid until the
+ instruction is released.
+ @param token A pointer to be filled in with the token.
+ @param inst The instruction to be queried.
+ @param index The index of the token in the instruction.
+ @result 0 on success; -1 otherwise.
+ */
+int EDGetToken(EDTokenRef *token,
+               EDInstRef inst,
+               int index);
+  
+/*!
+ @function EDGetTokenString
+ Gets the disassembled text for a token.
+ @param buf A pointer whose target will be filled in with a pointer to the
+   string.  (The string becomes invalid when the token is released.)
+ @param token The token to be queried.
+ @result 0 on success; -1 otherwise.
+ */
+int EDGetTokenString(const char **buf,
+                     EDTokenRef token);
+
+/*!
+ @function EDOperandIndexForToken
+ Returns the index of the operand to which a token belongs.
+ @param token The token to be queried.
+ @result The operand index on success; -1 otherwise.
+ */
+int EDOperandIndexForToken(EDTokenRef token);
+
+/*!
+ @function EDTokenIsWhitespace
+ @param token The token to be queried.
+ @result 1 if the token is whitespace; 0 if not; -1 on error.
+ */
+int EDTokenIsWhitespace(EDTokenRef token);
+  
+/*!
+ @function EDTokenIsPunctuation
+ @param token The token to be queried.
+ @result 1 if the token is punctuation; 0 if not; -1 on error.
+ */
+int EDTokenIsPunctuation(EDTokenRef token);
+
+/*!
+ @function EDTokenIsOpcode
+ @param token The token to be queried.
+ @result 1 if the token is an opcode; 0 if not; -1 on error.
+ */
+int EDTokenIsOpcode(EDTokenRef token);
+
+/*!
+ @function EDTokenIsLiteral
+ @param token The token to be queried.
+ @result 1 if the token is a numeric literal; 0 if not; -1 on error.
+ */
+int EDTokenIsLiteral(EDTokenRef token);
+
+/*!
+ @function EDTokenIsRegister
+ @param token The token to be queried.
+ @result 1 if the token identifies a register; 0 if not; -1 on error.
+ */
+int EDTokenIsRegister(EDTokenRef token);
+
+/*!
+ @function EDTokenIsNegativeLiteral
+ @param token The token to be queried.
+ @result 1 if the token is a negative signed literal; 0 if not; -1 on error.
+ */
+int EDTokenIsNegativeLiteral(EDTokenRef token);
+
+/*!
+ @function EDLiteralTokenAbsoluteValue
+ @param value A pointer whose target will be filled in with the absolute value
+   of the literal.
+ @param token The token to be queried.
+ @result 0 on success; -1 otherwise.
+ */
+int EDLiteralTokenAbsoluteValue(uint64_t *value,
+                                EDTokenRef token);
+
+/*!
+ @function EDRegisterTokenValue
+ @param registerID A pointer whose target will be filled in with the LLVM 
+   register identifier for the token.
+ @param token The token to be queried.
+ @result 0 on success; -1 otherwise.
+ */
+int EDRegisterTokenValue(unsigned *registerID,
+                         EDTokenRef token);
+  
+/*!
+ @functiongroup Creating and querying operands
+ */
+  
+/*!
+ @function EDNumOperands
+ @param inst The instruction to be queried.
+ @result The number of operands in the instruction, or -1 on error.
+ */
+int EDNumOperands(EDInstRef inst);
+
+/*!
+ @function EDGetOperand
+ Retrieves an operand from an instruction.  The operand is valid until the
+ instruction is released.
+ @param operand A pointer to be filled in with the operand.
+ @param inst The instruction to be queried.
+ @param index The index of the operand in the instruction.
+ @result 0 on success; -1 otherwise.
+ */
+int EDGetOperand(EDOperandRef *operand,
+                 EDInstRef inst,
+                 int index);
+  
+/*!
+ @function EDOperandIsRegister
+ @param operand The operand to be queried.
+ @result 1 if the operand names a register; 0 if not; -1 on error.
+ */
+int EDOperandIsRegister(EDOperandRef operand);
+
+/*!
+ @function EDOperandIsImmediate
+ @param operand The operand to be queried.
+ @result 1 if the operand specifies an immediate value; 0 if not; -1 on error.
+ */
+int EDOperandIsImmediate(EDOperandRef operand);
+
+/*!
+ @function EDOperandIsMemory
+ @param operand The operand to be queried.
+ @result 1 if the operand specifies a location in memory; 0 if not; -1 on error.
+ */
+int EDOperandIsMemory(EDOperandRef operand);
+
+/*!
+ @function EDRegisterOperandValue
+ @param value A pointer whose target will be filled in with the LLVM register ID
+   of the register named by the operand.  
+ @param operand The operand to be queried.
+ @result 0 on success; -1 otherwise.
+ */
+int EDRegisterOperandValue(unsigned *value,
+                           EDOperandRef operand);
+  
+/*!
+ @function EDImmediateOperandValue
+ @param value A pointer whose target will be filled in with the value of the
+   immediate.
+ @param operand The operand to be queried.
+ @result 0 on success; -1 otherwise.
+ */
+int EDImmediateOperandValue(uint64_t *value,
+                            EDOperandRef operand);
+
+/*!
+ @function EDEvaluateOperand
+ Evaluates an operand using a client-supplied register state accessor.  Register
+ operands are evaluated by reading the value of the register; immediate operands
+ are evaluated by reporting the immediate value; memory operands are evaluated
+ by computing the target address (with only those relocations applied that were
+ already applied to the original bytes).
+ @param result A pointer whose target is to be filled with the result of
+   evaluating the operand.
+ @param operand The operand to be evaluated.
+ @param regReader The function to use when reading registers from the register
+   state.
+ @param arg An anonymous argument for client use.
+ @result 0 if the operand could be evaluated; -1 otherwise.
+ */
+int EDEvaluateOperand(uint64_t *result,
+                      EDOperandRef operand,
+                      EDRegisterReaderCallback regReader,
+                      void *arg);
+  
+#ifdef __BLOCKS__
+
+/*!
+ @typedef EDByteBlock_t
+ Block-based interface to memory from which instructions may be read.
+ @param byte A pointer whose target should be filled in with the data returned.
+ @param address The address of the byte to be read.
+ @result 0 on success; -1 otherwise.
+ */
+typedef int (^EDByteBlock_t)(uint8_t *byte, uint64_t address);
+
+/*!
+ @typedef EDRegisterBlock_t
+ Block-based interface to registers from which registers may be read.
+ @param value A pointer whose target should be filled in with the value of the
+   register.
+ @param regID The LLVM register identifier for the register to read.
+ @result 0 if the register could be read; -1 otherwise.
+ */
+typedef int (^EDRegisterBlock_t)(uint64_t *value, unsigned regID);
+
+/*!
+ @typedef EDTokenVisitor_t
+ Block-based handler for individual tokens.
+ @param token The current token being read.
+ @result 0 to continue; 1 to stop normally; -1 on error.
+ */
+typedef int (^EDTokenVisitor_t)(EDTokenRef token);
+
+/*! @functiongroup Block-based interfaces */
+  
+/*!
+ @function EDBlockCreateInsts
+ Gets a set of contiguous instructions from a disassembler, using a block to
+ read memory.
+ @param insts A pointer to an array that will be filled in with the
+   instructions.  Must have at least count entries.  Entries not filled in will 
+   be set to NULL.
+ @param count The maximum number of instructions to fill in.
+ @param disassembler The disassembler to use when decoding the instructions.
+ @param byteBlock The block to use when reading the instruction's machine
+   code.
+ @param address The address of the first byte of the instruction.
+ @result The number of instructions read on success; 0 otherwise.
+ */
+unsigned int EDBlockCreateInsts(EDInstRef *insts,
+                                int count,
+                                EDDisassemblerRef disassembler,
+                                EDByteBlock_t byteBlock,
+                                uint64_t address);
+
+/*!
+ @function EDBlockEvaluateOperand
+ Evaluates an operand using a block to read registers.
+ @param result A pointer whose target is to be filled with the result of
+   evaluating the operand.
+ @param operand The operand to be evaluated.
+ @param regBlock The block to use when reading registers from the register
+   state.
+ @result 0 if the operand could be evaluated; -1 otherwise.
+ */
+int EDBlockEvaluateOperand(uint64_t *result,
+                           EDOperandRef operand,
+                           EDRegisterBlock_t regBlock);
+
+/*!
+ @function EDBlockVisitTokens
+ Visits every token with a visitor.
+ @param inst The instruction with the tokens to be visited.
+ @param visitor The visitor.
+ @result 0 if the visit ended normally; -1 if the visitor encountered an error
+   or there was some other error.
+ */
+int EDBlockVisitTokens(EDInstRef inst,
+                       EDTokenVisitor_t visitor);
+
+#endif
+  
+#ifdef __cplusplus
+}
+#endif
+
+#endif
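
For context (illustrative only, not part of the patch): the new EnhancedDisassembly interface is driven by client callbacks such as EDByteReaderCallback. A minimal client-side reader over an in-memory buffer might look like this; `CodeBuffer` and `bufferByteReader` are hypothetical names, only the callback signature comes from the header above.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Mirrors the EDByteReaderCallback signature declared above: fill *byte
// with the data at `address`, return 0 on success, -1 otherwise.
typedef int (*EDByteReaderCallback)(uint8_t *byte, uint64_t address, void *arg);

// Illustrative client-side state: a code buffer mapped at a base address.
struct CodeBuffer {
  const uint8_t *data;
  uint64_t base;
  size_t size;
};

// Reads one byte of machine code out of the buffer, bounds-checked.
static int bufferByteReader(uint8_t *byte, uint64_t address, void *arg) {
  const CodeBuffer *buf = static_cast<const CodeBuffer *>(arg);
  if (address < buf->base || address >= buf->base + buf->size)
    return -1; // outside the mapped region
  *byte = buf->data[address - buf->base];
  return 0;
}
```

A pointer to `bufferByteReader` plus a `CodeBuffer*` is the sort of pair a client would pass to EDCreateInsts as its `byteReader` and `arg` parameters.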
diff --git a/libclamav/c++/llvm/include/llvm/ADT/BitVector.h b/libclamav/c++/llvm/include/llvm/ADT/BitVector.h
index 45108c8..b9f2d83 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/BitVector.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/BitVector.h
@@ -307,15 +307,17 @@ public:
   }
 
   BitVector &operator|=(const BitVector &RHS) {
-    assert(Size == RHS.Size && "Illegal operation!");
-    for (unsigned i = 0; i < NumBitWords(size()); ++i)
+    if (size() < RHS.size())
+      resize(RHS.size());
+    for (size_t i = 0, e = NumBitWords(RHS.size()); i != e; ++i)
       Bits[i] |= RHS.Bits[i];
     return *this;
   }
 
   BitVector &operator^=(const BitVector &RHS) {
-    assert(Size == RHS.Size && "Illegal operation!");
-    for (unsigned i = 0; i < NumBitWords(size()); ++i)
+    if (size() < RHS.size())
+      resize(RHS.size());
+    for (size_t i = 0, e = NumBitWords(RHS.size()); i != e; ++i)
       Bits[i] ^= RHS.Bits[i];
     return *this;
   }
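
The BitVector hunk above replaces the size-equality assertion with resize-on-mismatch: `|=` and `^=` now grow the left-hand side to cover the right-hand side. A standalone sketch of the same word-wise semantics (using std::vector<uint64_t> in place of BitVector's packed word array, so this is an illustration rather than LLVM code):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Word-wise OR with the resize-on-mismatch semantics the patch gives
// BitVector::operator|=: grow the left side instead of asserting.
void orInPlace(std::vector<uint64_t> &lhs, const std::vector<uint64_t> &rhs) {
  if (lhs.size() < rhs.size())
    lhs.resize(rhs.size(), 0);          // was: assert(Size == RHS.Size)
  for (size_t i = 0, e = rhs.size(); i != e; ++i)
    lhs[i] |= rhs[i];
}
```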
diff --git a/libclamav/c++/llvm/include/llvm/ADT/DenseMap.h b/libclamav/c++/llvm/include/llvm/ADT/DenseMap.h
index 8b161ea..7350906 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/DenseMap.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/DenseMap.h
@@ -359,7 +359,7 @@ private:
     BucketT *OldBuckets = Buckets;
 
     // Double the number of buckets.
-    while (NumBuckets <= AtLeast)
+    while (NumBuckets < AtLeast)
       NumBuckets <<= 1;
     NumTombstones = 0;
     Buckets = static_cast<BucketT*>(operator new(sizeof(BucketT)*NumBuckets));
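
The one-character DenseMap change (`<=` to `<`) matters: with the old comparison, growing to a target equal to the current bucket count would double the table needlessly. A minimal sketch of the corrected growth loop (`nextBucketCount` is an illustrative name, not a DenseMap member):

```cpp
#include <cassert>

// Power-of-two growth as in DenseMap's grow() after the patch: stop as
// soon as the bucket count reaches AtLeast.  With the old `<=` test,
// asking for 64 buckets on a 64-bucket table would double to 128.
unsigned nextBucketCount(unsigned numBuckets, unsigned atLeast) {
  while (numBuckets < atLeast)   // was: numBuckets <= atLeast
    numBuckets <<= 1;
  return numBuckets;
}
```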
diff --git a/libclamav/c++/llvm/include/llvm/ADT/DenseSet.h b/libclamav/c++/llvm/include/llvm/ADT/DenseSet.h
index 89f55ca..0898b96 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/DenseSet.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/DenseSet.h
@@ -41,8 +41,8 @@ public:
     return TheMap.count(V);
   }
 
-  void erase(const ValueT &V) {
-    TheMap.erase(V);
+  bool erase(const ValueT &V) {
+    return TheMap.erase(V);
   }
 
   DenseSet &operator=(const DenseSet &RHS) {
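
With the hunk above, DenseSet::erase forwards DenseMap::erase's return value, so callers can tell whether an element was actually removed. The same idiom, sketched with std::set standing in (std::set::erase(key) reports how many elements were removed):

```cpp
#include <cassert>
#include <set>

// Mirrors the patched `return TheMap.erase(V);`: report whether the
// element was present.  `eraseReporting` is an illustrative helper.
bool eraseReporting(std::set<int> &s, int v) {
  return s.erase(v) != 0;
}
```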
diff --git a/libclamav/c++/llvm/include/llvm/ADT/ImmutableIntervalMap.h b/libclamav/c++/llvm/include/llvm/ADT/ImmutableIntervalMap.h
new file mode 100644
index 0000000..f33fb1e
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/ADT/ImmutableIntervalMap.h
@@ -0,0 +1,238 @@
+//===--- ImmutableIntervalMap.h - Immutable (functional) map  ---*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file defines the ImmutableIntervalMap class.
+//
+//===----------------------------------------------------------------------===//
+#include "llvm/ADT/ImmutableMap.h"
+
+namespace llvm {
+
+class Interval {
+private:
+  uint64_t Start;
+  uint64_t End;
+
+public:
+  Interval(uint64_t S, uint64_t E) : Start(S), End(E) {}
+
+  uint64_t getStart() const { return Start; }
+  uint64_t getEnd() const { return End; }
+};
+
+template <typename T>
+struct ImutIntervalInfo {
+  typedef const std::pair<Interval, T> value_type;
+  typedef const value_type &value_type_ref;
+  typedef const Interval key_type;
+  typedef const Interval &key_type_ref;
+  typedef const T data_type;
+  typedef const T &data_type_ref;
+
+  static key_type_ref KeyOfValue(value_type_ref V) {
+    return V.first;
+  }
+
+  static data_type_ref DataOfValue(value_type_ref V) {
+    return V.second;
+  }
+
+  static bool isEqual(key_type_ref L, key_type_ref R) {
+    return L.getStart() == R.getStart() && L.getEnd() == R.getEnd();
+  }
+
+  static bool isDataEqual(data_type_ref L, data_type_ref R) {
+    return ImutContainerInfo<T>::isEqual(L,R);
+  }
+
+  static bool isLess(key_type_ref L, key_type_ref R) {

+    // Assume L and R do not overlap.
+    if (L.getStart() < R.getStart()) {
+      assert(L.getEnd() < R.getStart());
+      return true;
+    } else if (L.getStart() == R.getStart()) {
+      assert(L.getEnd() == R.getEnd());
+      return false;
+    } else {
+      assert(L.getStart() > R.getEnd());
+      return false;
+    }
+  }
+
+  static bool isContainedIn(key_type_ref K, key_type_ref L) {
+    if (K.getStart() >= L.getStart() && K.getEnd() <= L.getEnd())
+      return true;
+    else
+      return false;
+  }
+
+  static void Profile(FoldingSetNodeID &ID, value_type_ref V) {
+    ID.AddInteger(V.first.getStart());
+    ID.AddInteger(V.first.getEnd());
+    ImutProfileInfo<T>::Profile(ID, V.second);
+  }
+};
+
+template <typename ImutInfo>
+class ImutIntervalAVLFactory : public ImutAVLFactory<ImutInfo> {
+  typedef ImutAVLTree<ImutInfo> TreeTy;
+  typedef typename ImutInfo::value_type     value_type;
+  typedef typename ImutInfo::value_type_ref value_type_ref;
+  typedef typename ImutInfo::key_type       key_type;
+  typedef typename ImutInfo::key_type_ref   key_type_ref;
+  typedef typename ImutInfo::data_type      data_type;
+  typedef typename ImutInfo::data_type_ref  data_type_ref;
+
+public:
+  ImutIntervalAVLFactory(BumpPtrAllocator &Alloc) 
+    : ImutAVLFactory<ImutInfo>(Alloc) {}
+
+  TreeTy *Add(TreeTy *T, value_type_ref V) {
+    T = Add_internal(V,T);
+    this->MarkImmutable(T);
+    return T;
+  }
+
+  TreeTy *Find(TreeTy *T, key_type_ref K) {
+    if (!T)
+      return NULL;
+
+    key_type_ref CurrentKey = ImutInfo::KeyOfValue(this->Value(T));
+
+    if (ImutInfo::isContainedIn(K, CurrentKey))
+      return T;
+    else if (ImutInfo::isLess(K, CurrentKey))
+      return Find(this->Left(T), K);
+    else
+      return Find(this->Right(T), K);
+  }
+
+private:
+  TreeTy *Add_internal(value_type_ref V, TreeTy *T) {
+    key_type_ref K = ImutInfo::KeyOfValue(V);
+    T = RemoveAllOverlaps(T, K);
+    if (this->isEmpty(T))
+      return this->CreateNode(NULL, V, NULL);
+
+    assert(!T->isMutable());
+
+    key_type_ref KCurrent = ImutInfo::KeyOfValue(this->Value(T));
+
+    if (ImutInfo::isLess(K, KCurrent))
+      return this->Balance(Add_internal(V, this->Left(T)), this->Value(T), this->Right(T));
+    else
+      return this->Balance(this->Left(T), this->Value(T), Add_internal(V, this->Right(T)));
+  }
+
+  // Remove all overlaps from T.
+  TreeTy *RemoveAllOverlaps(TreeTy *T, key_type_ref K) {
+    bool Changed;
+    do {
+      Changed = false;
+      T = RemoveOverlap(T, K, Changed);
+      this->MarkImmutable(T);
+    } while (Changed);
+
+    return T;
+  }
+
+  // Remove one overlap from T.
+  TreeTy *RemoveOverlap(TreeTy *T, key_type_ref K, bool &Changed) {
+    if (!T)
+      return NULL;
+    Interval CurrentK = ImutInfo::KeyOfValue(this->Value(T));
+
+    // If the current key does not overlap the inserted key.
+    if (CurrentK.getStart() > K.getEnd())
+      return this->Balance(RemoveOverlap(this->Left(T), K, Changed), this->Value(T), this->Right(T));
+    else if (CurrentK.getEnd() < K.getStart())
+      return this->Balance(this->Left(T), this->Value(T), RemoveOverlap(this->Right(T), K, Changed));
+
+    // Current key overlaps with the inserted key.
+    // Remove the current key.
+    Changed = true;
+    data_type_ref OldData = ImutInfo::DataOfValue(this->Value(T));
+    T = this->Remove_internal(CurrentK, T);
+    // Add back the non-overlapping parts of the current key.
+    if (CurrentK.getStart() < K.getStart()) {
+      if (CurrentK.getEnd() <= K.getEnd()) {
+        Interval NewK(CurrentK.getStart(), K.getStart()-1);
+        return Add_internal(std::make_pair(NewK, OldData), T);
+      } else {
+        Interval NewK1(CurrentK.getStart(), K.getStart()-1);
+        T = Add_internal(std::make_pair(NewK1, OldData), T); 
+
+        Interval NewK2(K.getEnd()+1, CurrentK.getEnd());
+        return Add_internal(std::make_pair(NewK2, OldData), T);
+      }
+    } else {
+      if (CurrentK.getEnd() > K.getEnd()) {
+        Interval NewK(K.getEnd()+1, CurrentK.getEnd());
+        return Add_internal(std::make_pair(NewK, OldData), T);
+      } else
+        return T;
+    }
+  }
+};
+
+/// ImmutableIntervalMap maps an interval [start, end] to a value. The intervals
+/// in the map are guaranteed to be disjoint.
+template <typename ValT>
+class ImmutableIntervalMap 
+  : public ImmutableMap<Interval, ValT, ImutIntervalInfo<ValT> > {
+
+  typedef typename ImutIntervalInfo<ValT>::value_type      value_type;
+  typedef typename ImutIntervalInfo<ValT>::value_type_ref  value_type_ref;
+  typedef typename ImutIntervalInfo<ValT>::key_type        key_type;
+  typedef typename ImutIntervalInfo<ValT>::key_type_ref    key_type_ref;
+  typedef typename ImutIntervalInfo<ValT>::data_type       data_type;
+  typedef typename ImutIntervalInfo<ValT>::data_type_ref   data_type_ref;
+  typedef ImutAVLTree<ImutIntervalInfo<ValT> > TreeTy;
+
+public:
+  explicit ImmutableIntervalMap(TreeTy *R) 
+    : ImmutableMap<Interval, ValT, ImutIntervalInfo<ValT> >(R) {}
+
+  class Factory {
+    ImutIntervalAVLFactory<ImutIntervalInfo<ValT> > F;
+
+  public:
+    Factory(BumpPtrAllocator& Alloc) : F(Alloc) {}
+
+    ImmutableIntervalMap GetEmptyMap() { 
+      return ImmutableIntervalMap(F.GetEmptyTree()); 
+    }
+
+    ImmutableIntervalMap Add(ImmutableIntervalMap Old, 
+                             key_type_ref K, data_type_ref D) {
+      TreeTy *T = F.Add(Old.Root, std::make_pair<key_type, data_type>(K, D));
+      return ImmutableIntervalMap(F.GetCanonicalTree(T));
+    }
+
+    ImmutableIntervalMap Remove(ImmutableIntervalMap Old, key_type_ref K) {
+      TreeTy *T = F.Remove(Old.Root, K);
+      return ImmutableIntervalMap(F.GetCanonicalTree(T));
+    }
+
+    data_type *Lookup(ImmutableIntervalMap M, key_type_ref K) {
+      TreeTy *T = F.Find(M.getRoot(), K);
+      if (T)
+        return &T->getValue().second;
+      else
+        return 0;
+    }
+  };
+
+private:
+  // For ImmutableIntervalMap, the lookup operation has to be done by the 
+  // factory.
+  data_type* lookup(key_type_ref K) const;
+};
+
+} // end namespace llvm
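
The heart of RemoveOverlap above is the case analysis that splits an existing interval around the inserted key so the map stays disjoint: an overlapped entry is removed and 0, 1, or 2 leftover pieces are re-added. That splitting rule in isolation (with `Iv` and `survivingPieces` as illustrative stand-ins for the Interval class and the re-add logic):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Stand-in for the Interval class in ImmutableIntervalMap.h.
struct Iv { uint64_t Start, End; };

// Given an existing interval `cur` that overlaps the inserted key `k`,
// return the pieces of `cur` that survive: at most one piece strictly to
// the left of k and one strictly to the right, matching RemoveOverlap's
// four branches.
std::vector<Iv> survivingPieces(Iv cur, Iv k) {
  std::vector<Iv> out;
  if (cur.Start < k.Start) {
    Iv left = { cur.Start, k.Start - 1 };  // piece to the left of k
    out.push_back(left);
  }
  if (cur.End > k.End) {
    Iv right = { k.End + 1, cur.End };     // piece to the right of k
    out.push_back(right);
  }
  return out;
}
```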
diff --git a/libclamav/c++/llvm/include/llvm/ADT/ImmutableMap.h b/libclamav/c++/llvm/include/llvm/ADT/ImmutableMap.h
index 1b3f1a9..8af128e 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/ImmutableMap.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/ImmutableMap.h
@@ -68,7 +68,7 @@ public:
   typedef typename ValInfo::data_type_ref   data_type_ref;
   typedef ImutAVLTree<ValInfo>              TreeTy;
 
-private:
+protected:
   TreeTy* Root;
 
 public:
@@ -106,13 +106,10 @@ public:
     void operator=(const Factory& RHS); // DO NOT IMPLEMENT
   };
 
-  friend class Factory;
-
   bool contains(key_type_ref K) const {
     return Root ? Root->contains(K) : false;
   }
 
-
   bool operator==(ImmutableMap RHS) const {
     return Root && RHS.Root ? Root->isEqual(*RHS.Root) : Root == RHS.Root;
   }
diff --git a/libclamav/c++/llvm/include/llvm/ADT/ImmutableSet.h b/libclamav/c++/llvm/include/llvm/ADT/ImmutableSet.h
index ac06a40..65e70e2 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/ImmutableSet.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/ImmutableSet.h
@@ -27,6 +27,7 @@ namespace llvm {
 //===----------------------------------------------------------------------===//
 
 template <typename ImutInfo> class ImutAVLFactory;
+template <typename ImutInfo> class ImutIntervalAVLFactory;
 template <typename ImutInfo> class ImutAVLTreeInOrderIterator;
 template <typename ImutInfo> class ImutAVLTreeGenericIterator;
 
@@ -39,6 +40,7 @@ public:
 
   typedef ImutAVLFactory<ImutInfo>          Factory;
   friend class ImutAVLFactory<ImutInfo>;
+  friend class ImutIntervalAVLFactory<ImutInfo>;
 
   friend class ImutAVLTreeGenericIterator<ImutInfo>;
   friend class FoldingSet<ImutAVLTree>;
@@ -389,7 +391,7 @@ public:
   // These have succinct names so that the balancing code
   // is as terse (and readable) as possible.
   //===--------------------------------------------------===//
-private:
+protected:
 
   bool           isEmpty(TreeTy* T) const { return !T; }
   unsigned Height(TreeTy* T) const { return T ? T->getHeight() : 0; }
@@ -581,25 +583,14 @@ public:
         continue;
       
       // We found a collision.  Perform a comparison of Contents('T')
-      // with Contents('L')+'V'+Contents('R').
+      // with Contents('TNew')
       typename TreeTy::iterator TI = T->begin(), TE = T->end();
       
-      // First compare Contents('L') with the (initial) contents of T.
-      if (!CompareTreeWithSection(TNew->getLeft(), TI, TE))
-        continue;
-      
-      // Now compare the new data element.
-      if (TI == TE || !TI->ElementEqual(TNew->getValue()))
-        continue;
-      
-      ++TI;
-      
-      // Now compare the remainder of 'T' with 'R'.
-      if (!CompareTreeWithSection(TNew->getRight(), TI, TE))
+      if (!CompareTreeWithSection(TNew, TI, TE))
         continue;
       
       if (TI != TE)
-        continue; // Contents('R') did not match suffix of 'T'.
+        continue; // T has more contents than TNew.
       
       // Trees did match!  Return 'T'.
       return T;
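The hunk above collapses a three-part collision check (left subtree, new value, right subtree) into a single CompareTreeWithSection(TNew, TI, TE) call over the whole new tree. A minimal sketch of what such a comparison does, using a toy node type and a vector iterator range in place of LLVM's ImutAVLTree and its iterators (all names here are illustrative, not LLVM's):

```cpp
#include <cassert>
#include <vector>

// Hypothetical miniature of an AVL node; LLVM's ImutAVLTree is far richer.
struct Node {
  int Value;
  Node *Left = nullptr, *Right = nullptr;
};

// Compare the in-order contents of 'T' against the iterator range [I, E),
// advancing I past every matched element -- the shape of the single
// CompareTreeWithSection(TNew, TI, TE) call the patch switches to.  After a
// successful call, I != E means the range had more contents than the tree.
bool CompareTreeWithSection(Node *T, std::vector<int>::const_iterator &I,
                            std::vector<int>::const_iterator E) {
  if (!T)
    return true;
  if (!CompareTreeWithSection(T->Left, I, E))
    return false;
  if (I == E || *I != T->Value)
    return false;
  ++I;
  return CompareTreeWithSection(T->Right, I, E);
}
```

The trailing `if (TI != TE) continue;` in the patch corresponds to checking that the iterator was fully consumed.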
diff --git a/libclamav/c++/llvm/include/llvm/ADT/SmallBitVector.h b/libclamav/c++/llvm/include/llvm/ADT/SmallBitVector.h
index 346fb1c..5c774b9 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/SmallBitVector.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/SmallBitVector.h
@@ -310,11 +310,47 @@ public:
   }
 
   // Intersection, union, disjoint union.
-  BitVector &operator&=(const SmallBitVector &RHS); // TODO: implement
+  SmallBitVector &operator&=(const SmallBitVector &RHS) {
+    resize(std::max(size(), RHS.size()));
+    if (isSmall())
+      setSmallBits(getSmallBits() & RHS.getSmallBits());
+    else if (!RHS.isSmall())
+      X.getPointer()->operator&=(*RHS.X.getPointer());
+    else {
+      SmallBitVector Copy = RHS;
+      Copy.resize(size());
+      X.getPointer()->operator&=(*Copy.X.getPointer());
+    }
+    return *this;
+  }
 
-  BitVector &operator|=(const SmallBitVector &RHS); // TODO: implement
+  SmallBitVector &operator|=(const SmallBitVector &RHS) {
+    resize(std::max(size(), RHS.size()));
+    if (isSmall())
+      setSmallBits(getSmallBits() | RHS.getSmallBits());
+    else if (!RHS.isSmall())
+      X.getPointer()->operator|=(*RHS.X.getPointer());
+    else {
+      SmallBitVector Copy = RHS;
+      Copy.resize(size());
+      X.getPointer()->operator|=(*Copy.X.getPointer());
+    }
+    return *this;
+  }
 
-  BitVector &operator^=(const SmallBitVector &RHS); // TODO: implement
+  SmallBitVector &operator^=(const SmallBitVector &RHS) {
+    resize(std::max(size(), RHS.size()));
+    if (isSmall())
+      setSmallBits(getSmallBits() ^ RHS.getSmallBits());
+    else if (!RHS.isSmall())
+      X.getPointer()->operator^=(*RHS.X.getPointer());
+    else {
+      SmallBitVector Copy = RHS;
+      Copy.resize(size());
+      X.getPointer()->operator^=(*Copy.X.getPointer());
+    }
+    return *this;
+  }
 
   // Assignment operator.
   const SmallBitVector &operator=(const SmallBitVector &RHS) {
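Each of the three new SmallBitVector operators follows the same pattern: grow to the larger of the two sizes, then combine, copying and resizing the RHS when representations disagree. A minimal sketch of that widening-then-combining semantics, with std::vector<bool> standing in for SmallBitVector (no small-size optimization is modeled, and the function name is illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sketch of operator&= semantics: both operands are conceptually widened to
// the larger size before combining, so trailing bits of the shorter operand
// read as zero.
std::vector<bool> &andAssign(std::vector<bool> &LHS,
                             const std::vector<bool> &RHS) {
  std::vector<bool> Tmp = RHS;               // mirrors the Copy = RHS fallback
  size_t N = std::max(LHS.size(), Tmp.size());
  LHS.resize(N);           // mirrors resize(std::max(size(), RHS.size()))
  Tmp.resize(N);           // mirrors Copy.resize(size())
  for (size_t i = 0; i != N; ++i)
    LHS[i] = LHS[i] && Tmp[i];
  return LHS;
}
```

operator|= and operator^= differ only in the combining step.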
diff --git a/libclamav/c++/llvm/include/llvm/ADT/SmallPtrSet.h b/libclamav/c++/llvm/include/llvm/ADT/SmallPtrSet.h
index c29fc9f..ef08125 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/SmallPtrSet.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/SmallPtrSet.h
@@ -225,7 +225,7 @@ struct NextPowerOfTwo {
 };
   
 
-/// SmallPtrSet - This class implements a set which is optimizer for holding
+/// SmallPtrSet - This class implements a set which is optimized for holding
 /// SmallSize or less elements.  This internally rounds up SmallSize to the next
 /// power of two if it is not already a power of two.  See the comments above
 /// SmallPtrSetImpl for details of the algorithm.
diff --git a/libclamav/c++/llvm/include/llvm/ADT/Triple.h b/libclamav/c++/llvm/include/llvm/ADT/Triple.h
index fe39324..8798b0e 100644
--- a/libclamav/c++/llvm/include/llvm/ADT/Triple.h
+++ b/libclamav/c++/llvm/include/llvm/ADT/Triple.h
@@ -66,6 +66,7 @@ public:
     ppc,     // PPC: powerpc
     ppc64,   // PPC64: powerpc64, ppu
     sparc,   // Sparc: sparc
+    sparcv9, // Sparcv9: Sparcv9
     systemz, // SystemZ: s390x
     tce,     // TCE (http://tce.cs.tut.fi/): tce
     thumb,   // Thumb: thumb, thumbv.*
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/ConstantFolding.h b/libclamav/c++/llvm/include/llvm/Analysis/ConstantFolding.h
index 06951c7..e2675eb 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/ConstantFolding.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/ConstantFolding.h
@@ -37,7 +37,7 @@ Constant *ConstantFoldInstruction(Instruction *I, const TargetData *TD = 0);
 /// ConstantFoldConstantExpression - Attempt to fold the constant expression
 /// using the specified TargetData.  If successful, the constant result is
 /// result is returned, if not, null is returned.
-Constant *ConstantFoldConstantExpression(ConstantExpr *CE,
+Constant *ConstantFoldConstantExpression(const ConstantExpr *CE,
                                          const TargetData *TD = 0);
 
 /// ConstantFoldInstOperands - Attempt to constant fold an instruction with the
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/DebugInfo.h b/libclamav/c++/llvm/include/llvm/Analysis/DebugInfo.h
index 150d3ee..ccf0105 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/DebugInfo.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/DebugInfo.h
@@ -193,7 +193,9 @@ namespace llvm {
       FlagFwdDecl          = 1 << 2,
       FlagAppleBlock       = 1 << 3,
       FlagBlockByrefStruct = 1 << 4,
-      FlagVirtual          = 1 << 5
+      FlagVirtual          = 1 << 5,
+      FlagArtificial       = 1 << 6  // To identify artificial arguments in
+                                     // a subroutine type, e.g. "this" in C++.
     };
 
   protected:
@@ -241,6 +243,9 @@ namespace llvm {
     bool isVirtual() const {
       return (getFlags() & FlagVirtual) != 0;
     }
+    bool isArtificial() const {
+      return (getFlags() & FlagArtificial) != 0;
+    }
 
     /// dump - print type.
     void dump() const;
@@ -298,6 +303,9 @@ namespace llvm {
 
     DIArray getTypeArray() const { return getFieldAs<DIArray>(10); }
     unsigned getRunTimeLang() const { return getUnsignedField(11); }
+    DICompositeType getContainingType() const {
+      return getFieldAs<DICompositeType>(12);
+    }
 
     /// Verify - Verify that a composite type descriptor is well formed.
     bool Verify() const;
@@ -372,6 +380,7 @@ namespace llvm {
     DICompositeType getContainingType() const {
       return getFieldAs<DICompositeType>(13);
     }
+    unsigned isArtificial() const    { return getUnsignedField(14); }
 
     StringRef getFilename() const    { return getCompileUnit().getFilename();}
     StringRef getDirectory() const   { return getCompileUnit().getDirectory();}
@@ -567,7 +576,11 @@ namespace llvm {
                                         uint64_t OffsetInBits, unsigned Flags,
                                         DIType DerivedFrom,
                                         DIArray Elements,
-                                        unsigned RunTimeLang = 0);
+                                        unsigned RunTimeLang = 0,
+                                        MDNode *ContainingType = 0);
+
+    /// CreateArtificialType - Create a new DIType with "artificial" flag set.
+    DIType CreateArtificialType(DIType Ty);
 
     /// CreateCompositeType - Create a composite type like array, struct, etc.
     DICompositeType CreateCompositeTypeEx(unsigned Tag, DIDescriptor Context,
@@ -591,7 +604,8 @@ namespace llvm {
                                   bool isDefinition,
                                   unsigned VK = 0,
                                   unsigned VIndex = 0,
-                                  DIType = DIType());
+                                  DIType = DIType(),
+                                  bool isArtificial = 0);
 
     /// CreateSubprogramDefinition - Create new subprogram descriptor for the
     /// given declaration. 
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/IVUsers.h b/libclamav/c++/llvm/include/llvm/Analysis/IVUsers.h
index 50f7d45..e6e9c71 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/IVUsers.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/IVUsers.h
@@ -16,29 +16,27 @@
 #define LLVM_ANALYSIS_IVUSERS_H
 
 #include "llvm/Analysis/LoopPass.h"
-#include "llvm/Analysis/ScalarEvolution.h"
-#include "llvm/ADT/SmallVector.h"
-#include <map>
+#include "llvm/Support/ValueHandle.h"
 
 namespace llvm {
 
 class DominatorTree;
 class Instruction;
 class Value;
-struct IVUsersOfOneStride;
-
-/// IVStrideUse - Keep track of one use of a strided induction variable, where
-/// the stride is stored externally.  The Offset member keeps track of the
-/// offset from the IV, User is the actual user of the operand, and
-/// 'OperandValToReplace' is the operand of the User that is the use.
+class IVUsers;
+class ScalarEvolution;
+class SCEV;
+
+/// IVStrideUse - Keep track of one use of a strided induction variable.
+/// The Expr member keeps track of the expression, User is the actual user
+/// instruction of the operand, and 'OperandValToReplace' is the operand of
+/// the User that is the use.
 class IVStrideUse : public CallbackVH, public ilist_node<IVStrideUse> {
 public:
-  IVStrideUse(IVUsersOfOneStride *parent,
-              const SCEV *offset,
+  IVStrideUse(IVUsers *P, const SCEV *S, const SCEV *Off,
               Instruction* U, Value *O)
-    : CallbackVH(U), Parent(parent), Offset(offset),
-      OperandValToReplace(O),
-      IsUseOfPostIncrementedValue(false) {
+    : CallbackVH(U), Parent(P), Stride(S), Offset(Off),
+      OperandValToReplace(O), IsUseOfPostIncrementedValue(false) {
   }
 
   /// getUser - Return the user instruction for this use.
@@ -51,9 +49,17 @@ public:
     setValPtr(NewUser);
   }
 
-  /// getParent - Return a pointer to the IVUsersOfOneStride that owns
+  /// getParent - Return a pointer to the IVUsers that owns
   /// this IVStrideUse.
-  IVUsersOfOneStride *getParent() const { return Parent; }
+  IVUsers *getParent() const { return Parent; }
+
+  /// getStride - Return the expression for the stride for the use.
+  const SCEV *getStride() const { return Stride; }
+
+  /// setStride - Assign a new stride to this use.
+  void setStride(const SCEV *Val) {
+    Stride = Val;
+  }
 
   /// getOffset - Return the offset to add to a theoretical induction
   /// variable that starts at zero and counts up by the stride to compute
@@ -92,8 +98,11 @@ public:
   }
 
 private:
-  /// Parent - a pointer to the IVUsersOfOneStride that owns this IVStrideUse.
-  IVUsersOfOneStride *Parent;
+  /// Parent - a pointer to the IVUsers that owns this IVStrideUse.
+  IVUsers *Parent;
+
+  /// Stride - The stride for this use.
+  const SCEV *Stride;
 
   /// Offset - The offset to add to the base induction expression.
   const SCEV *Offset;
@@ -138,37 +147,8 @@ private:
   mutable ilist_node<IVStrideUse> Sentinel;
 };
 
-/// IVUsersOfOneStride - This structure keeps track of all instructions that
-/// have an operand that is based on the trip count multiplied by some stride.
-struct IVUsersOfOneStride : public ilist_node<IVUsersOfOneStride> {
-private:
-  IVUsersOfOneStride(const IVUsersOfOneStride &I); // do not implement
-  void operator=(const IVUsersOfOneStride &I);     // do not implement
-
-public:
-  IVUsersOfOneStride() : Stride(0) {}
-
-  explicit IVUsersOfOneStride(const SCEV *stride) : Stride(stride) {}
-
-  /// Stride - The stride for all the contained IVStrideUses. This is
-  /// a constant for affine strides.
-  const SCEV *Stride;
-
-  /// Users - Keep track of all of the users of this stride as well as the
-  /// initial value and the operand that uses the IV.
-  ilist<IVStrideUse> Users;
-
-  void addUser(const SCEV *Offset, Instruction *User, Value *Operand) {
-    Users.push_back(new IVStrideUse(this, Offset, User, Operand));
-  }
-
-  void removeUser(IVStrideUse *User) {
-    Users.erase(User);
-  }
-};
-
 class IVUsers : public LoopPass {
-  friend class IVStrideUserVH;
+  friend class IVStrideUse;
   Loop *L;
   LoopInfo *LI;
   DominatorTree *DT;
@@ -177,19 +157,8 @@ class IVUsers : public LoopPass {
 
   /// IVUses - A list of all tracked IV uses of induction variable expressions
   /// we are interested in.
-  ilist<IVUsersOfOneStride> IVUses;
-
-public:
-  /// IVUsesByStride - A mapping from the strides in StrideOrder to the
-  /// uses in IVUses.
-  std::map<const SCEV *, IVUsersOfOneStride*> IVUsesByStride;
+  ilist<IVStrideUse> IVUses;
 
-  /// StrideOrder - An ordering of the keys in IVUsesByStride that is stable:
-  /// We use this to iterate over the IVUsesByStride collection without being
-  /// dependent on random ordering of pointers in the process.
-  SmallVector<const SCEV *, 16> StrideOrder;
-
-private:
   virtual void getAnalysisUsage(AnalysisUsage &AU) const;
 
   virtual bool runOnLoop(Loop *L, LPPassManager &LPM);
@@ -205,8 +174,8 @@ public:
   /// return true.  Otherwise, return false.
   bool AddUsersIfInteresting(Instruction *I);
 
-  void AddUser(const SCEV *Stride, const SCEV *Offset,
-               Instruction *User, Value *Operand);
+  IVStrideUse &AddUser(const SCEV *Stride, const SCEV *Offset,
+                       Instruction *User, Value *Operand);
 
   /// getReplacementExpr - Return a SCEV expression which computes the
   /// value of the OperandValToReplace of the given IVStrideUse.
@@ -217,6 +186,14 @@ public:
   /// isUseOfPostIncrementedValue flag.
   const SCEV *getCanonicalExpr(const IVStrideUse &U) const;
 
+  typedef ilist<IVStrideUse>::iterator iterator;
+  typedef ilist<IVStrideUse>::const_iterator const_iterator;
+  iterator begin() { return IVUses.begin(); }
+  iterator end()   { return IVUses.end(); }
+  const_iterator begin() const { return IVUses.begin(); }
+  const_iterator end() const   { return IVUses.end(); }
+  bool empty() const { return IVUses.empty(); }
+
   void print(raw_ostream &OS, const Module* = 0) const;
 
   /// dump - This method is used for debugging.
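The IVUsers refactoring above replaces the stride-keyed map of IVUsersOfOneStride buckets with one flat list in which each use carries its own Stride, and AddUser now returns a reference to the newly added use. A minimal sketch of that layout change, with std::list standing in for LLVM's intrusive ilist and purely illustrative member names:

```cpp
#include <cassert>
#include <list>
#include <string>

// Each use now records its own stride (previously implicit in the owning
// IVUsersOfOneStride bucket); strings stand in for SCEV expressions here.
struct Use {
  std::string Stride;
  std::string Offset;
  int UserId;
};

struct Users {
  std::list<Use> IVUses;   // one flat list, no per-stride buckets

  // Mirrors the new AddUser signature: returns a reference to the use so
  // callers can tweak it afterwards (cf. the new setStride).
  Use &AddUser(std::string Stride, std::string Offset, int UserId) {
    IVUses.push_back({std::move(Stride), std::move(Offset), UserId});
    return IVUses.back();
  }
};
```

Because std::list (like ilist) never relocates elements, the returned reference stays valid as more uses are added.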
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/InlineCost.h b/libclamav/c++/llvm/include/llvm/Analysis/InlineCost.h
index 7ce49d7..84acd7d 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/InlineCost.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/InlineCost.h
@@ -34,7 +34,7 @@ namespace llvm {
     /// NeverInline - True if this callee should never be inlined into a
     /// caller.
     bool NeverInline;
-    
+
     /// usesDynamicAlloca - True if this function calls alloca (in the C sense).
     bool usesDynamicAlloca;
 
@@ -42,17 +42,20 @@ namespace llvm {
     /// is used to estimate the code size cost of inlining it.
     unsigned NumInsts, NumBlocks;
 
+    /// NumCalls - Keep track of the number of calls to 'big' functions.
+    unsigned NumCalls;
+
     /// NumVectorInsts - Keep track of how many instructions produce vector
     /// values.  The inliner is being more aggressive with inlining vector
     /// kernels.
     unsigned NumVectorInsts;
-    
+
     /// NumRets - Keep track of how many Ret instructions the block contains.
     unsigned NumRets;
 
     CodeMetrics() : NeverInline(false), usesDynamicAlloca(false), NumInsts(0),
-                    NumBlocks(0), NumVectorInsts(0), NumRets(0) {}
-    
+                    NumBlocks(0), NumCalls(0), NumVectorInsts(0), NumRets(0) {}
+
     /// analyzeBasicBlock - Add information about the specified basic block
     /// to the current structure.
     void analyzeBasicBlock(const BasicBlock *BB);
@@ -64,7 +67,9 @@ namespace llvm {
 
   namespace InlineConstants {
     // Various magic constants used to adjust heuristics.
-    const int CallPenalty = 5;
+    const int InstrCost = 5;
+    const int IndirectCallBonus = 500;
+    const int CallPenalty = 25;
     const int LastCallToStaticBonus = -15000;
     const int ColdccPenalty = 2000;
     const int NoreturnPenalty = 10000;
@@ -119,18 +124,18 @@ namespace llvm {
       return getCost();
     }
   };
-  
+
   /// InlineCostAnalyzer - Cost analyzer used by inliner.
   class InlineCostAnalyzer {
     struct ArgInfo {
     public:
       unsigned ConstantWeight;
       unsigned AllocaWeight;
-      
+
       ArgInfo(unsigned CWeight, unsigned AWeight)
         : ConstantWeight(CWeight), AllocaWeight(AWeight) {}
     };
-    
+
     struct FunctionInfo {
       CodeMetrics Metrics;
 
@@ -139,12 +144,12 @@ namespace llvm {
       /// would reduce the code size.  If so, we add some value to the argument
       /// entry here.
       std::vector<ArgInfo> ArgumentWeights;
-    
+
       /// CountCodeReductionForConstant - Figure out an approximation for how
       /// many instructions will be constant folded if the specified value is
       /// constant.
       unsigned CountCodeReductionForConstant(Value *V);
-    
+
       /// CountCodeReductionForAlloca - Figure out an approximation of how much
       /// smaller the function will be if it is inlined into a context where an
       /// argument becomes an alloca.
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/LoopInfo.h b/libclamav/c++/llvm/include/llvm/Analysis/LoopInfo.h
index 33bf0b0..d5e4d51 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/LoopInfo.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/LoopInfo.h
@@ -553,6 +553,10 @@ public:
   /// normal unsigned value, if possible. Returns 0 if the trip count is unknown
   /// of not constant. Will also return 0 if the trip count is very large
   /// (>= 2^32)
+  ///
+  /// The IndVarSimplify pass transforms loops to have a form that this
+  /// function easily understands.
+  ///
   unsigned getSmallConstantTripCount() const;
 
   /// getSmallConstantTripMultiple - Returns the largest constant divisor of the
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/MemoryBuiltins.h b/libclamav/c++/llvm/include/llvm/Analysis/MemoryBuiltins.h
index f6fa0c8..a7f42c9 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/MemoryBuiltins.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/MemoryBuiltins.h
@@ -72,7 +72,7 @@ Value *getMallocArraySize(CallInst *CI, const TargetData *TD,
 //  free Call Utility Functions.
 //
 
-/// isFreeCall - Returns true if the the value is a call to the builtin free()
+/// isFreeCall - Returns true if the value is a call to the builtin free()
 bool isFreeCall(const Value *I);
 
 } // End llvm namespace
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolution.h b/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolution.h
index e281971..383ee88 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolution.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolution.h
@@ -452,11 +452,25 @@ namespace llvm {
     const SCEV *getUMaxExpr(SmallVectorImpl<const SCEV *> &Operands);
     const SCEV *getSMinExpr(const SCEV *LHS, const SCEV *RHS);
     const SCEV *getUMinExpr(const SCEV *LHS, const SCEV *RHS);
-    const SCEV *getFieldOffsetExpr(const StructType *STy, unsigned FieldNo);
-    const SCEV *getAllocSizeExpr(const Type *AllocTy);
     const SCEV *getUnknown(Value *V);
     const SCEV *getCouldNotCompute();
 
+    /// getSizeOfExpr - Return an expression for sizeof on the given type.
+    ///
+    const SCEV *getSizeOfExpr(const Type *AllocTy);
+
+    /// getAlignOfExpr - Return an expression for alignof on the given type.
+    ///
+    const SCEV *getAlignOfExpr(const Type *AllocTy);
+
+    /// getOffsetOfExpr - Return an expression for offsetof on the given field.
+    ///
+    const SCEV *getOffsetOfExpr(const StructType *STy, unsigned FieldNo);
+
+    /// getOffsetOfExpr - Return an expression for offsetof on the given
+    /// field, with the field number given as a Constant.
+    const SCEV *getOffsetOfExpr(const Type *CTy, Constant *FieldNo);
+
     /// getNegativeSCEV - Return the SCEV object corresponding to -V.
     ///
     const SCEV *getNegativeSCEV(const SCEV *V);
@@ -503,7 +517,7 @@ namespace llvm {
 
     /// getIntegerSCEV - Given a SCEVable type, create a constant for the
     /// specified signed integer value and return a SCEV for the constant.
-    const SCEV *getIntegerSCEV(int Val, const Type *Ty);
+    const SCEV *getIntegerSCEV(int64_t Val, const Type *Ty);
 
     /// getUMaxFromMismatchedTypes - Promote the operands to the wider of
     /// the types using zero-extension, and then perform a umax operation
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolutionExpander.h b/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolutionExpander.h
index 01df503..26dc0c4 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolutionExpander.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolutionExpander.h
@@ -27,10 +27,7 @@ namespace llvm {
   /// and destroy it when finished to allow the release of the associated
   /// memory.
   class SCEVExpander : public SCEVVisitor<SCEVExpander, Value*> {
-  public:
     ScalarEvolution &SE;
-
-  private:
     std::map<std::pair<const SCEV *, Instruction *>, AssertingVH<Value> >
       InsertedExpressions;
     std::set<Value*> InsertedValues;
@@ -57,11 +54,11 @@ namespace llvm {
     /// in a more literal form.
     bool CanonicalMode;
 
-  protected:
     typedef IRBuilder<true, TargetFolder> BuilderType;
     BuilderType Builder;
 
     friend struct SCEVVisitor<SCEVExpander, Value*>;
+
   public:
     /// SCEVExpander - Construct a SCEVExpander in "canonical" mode.
     explicit SCEVExpander(ScalarEvolution &se)
@@ -167,17 +164,13 @@ namespace llvm {
 
     Value *visitUMaxExpr(const SCEVUMaxExpr *S);
 
-    Value *visitFieldOffsetExpr(const SCEVFieldOffsetExpr *S);
-
-    Value *visitAllocSizeExpr(const SCEVAllocSizeExpr *S);
-
     Value *visitUnknown(const SCEVUnknown *S) {
       return S->getValue();
     }
 
-    void rememberInstruction(Value *I) {
-      if (!PostIncLoop) InsertedValues.insert(I);
-    }
+    void rememberInstruction(Value *I);
+
+    void restoreInsertPoint(BasicBlock *BB, BasicBlock::iterator I);
 
     Value *expandAddRecExprLiterally(const SCEVAddRecExpr *);
     PHINode *getAddRecExprPHILiterally(const SCEVAddRecExpr *Normalized,
diff --git a/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolutionExpressions.h b/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolutionExpressions.h
index 64b8b0b..0ab3b3f 100644
--- a/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolutionExpressions.h
+++ b/libclamav/c++/llvm/include/llvm/Analysis/ScalarEvolutionExpressions.h
@@ -27,7 +27,7 @@ namespace llvm {
     // folders simpler.
     scConstant, scTruncate, scZeroExtend, scSignExtend, scAddExpr, scMulExpr,
     scUDivExpr, scAddRecExpr, scUMaxExpr, scSMaxExpr,
-    scFieldOffset, scAllocSize, scUnknown, scCouldNotCompute
+    scUnknown, scCouldNotCompute
   };
 
   //===--------------------------------------------------------------------===//
@@ -412,12 +412,15 @@ namespace llvm {
     }
 
     virtual bool hasComputableLoopEvolution(const Loop *QL) const {
-      if (L == QL) return true;
-      return false;
+      return L == QL;
     }
 
     virtual bool isLoopInvariant(const Loop *QueryLoop) const;
 
+    bool dominates(BasicBlock *BB, DominatorTree *DT) const;
+
+    bool properlyDominates(BasicBlock *BB, DominatorTree *DT) const;
+
     /// isAffine - Return true if this is an affine AddRec (i.e., it represents
     /// an expression A+B*x where A and B are loop invariant values.
     bool isAffine() const {
@@ -512,95 +515,6 @@ namespace llvm {
   };
 
   //===--------------------------------------------------------------------===//
-  /// SCEVTargetDataConstant - This node is the base class for representing
-  /// target-dependent values in a target-independent way.
-  ///
-  class SCEVTargetDataConstant : public SCEV {
-  protected:
-    const Type *Ty;
-    SCEVTargetDataConstant(const FoldingSetNodeID &ID, enum SCEVTypes T,
-                           const Type *ty) :
-      SCEV(ID, T), Ty(ty) {}
-
-  public:
-    virtual bool isLoopInvariant(const Loop *) const { return true; }
-    virtual bool hasComputableLoopEvolution(const Loop *) const {
-      return false; // not computable
-    }
-
-    virtual bool hasOperand(const SCEV *) const {
-      return false;
-    }
-
-    bool dominates(BasicBlock *, DominatorTree *) const {
-      return true;
-    }
-
-    bool properlyDominates(BasicBlock *, DominatorTree *) const {
-      return true;
-    }
-
-    virtual const Type *getType() const { return Ty; }
-
-    /// Methods for support type inquiry through isa, cast, and dyn_cast:
-    static inline bool classof(const SCEVTargetDataConstant *S) { return true; }
-    static inline bool classof(const SCEV *S) {
-      return S->getSCEVType() == scFieldOffset ||
-             S->getSCEVType() == scAllocSize;
-    }
-  };
-
-  //===--------------------------------------------------------------------===//
-  /// SCEVFieldOffsetExpr - This node represents an offsetof expression.
-  ///
-  class SCEVFieldOffsetExpr : public SCEVTargetDataConstant {
-    friend class ScalarEvolution;
-
-    const StructType *STy;
-    unsigned FieldNo;
-    SCEVFieldOffsetExpr(const FoldingSetNodeID &ID, const Type *ty,
-                        const StructType *sty, unsigned fieldno) :
-      SCEVTargetDataConstant(ID, scFieldOffset, ty),
-      STy(sty), FieldNo(fieldno) {}
-
-  public:
-    const StructType *getStructType() const { return STy; }
-    unsigned getFieldNo() const { return FieldNo; }
-
-    virtual void print(raw_ostream &OS) const;
-
-    /// Methods for support type inquiry through isa, cast, and dyn_cast:
-    static inline bool classof(const SCEVFieldOffsetExpr *S) { return true; }
-    static inline bool classof(const SCEV *S) {
-      return S->getSCEVType() == scFieldOffset;
-    }
-  };
-
-  //===--------------------------------------------------------------------===//
-  /// SCEVAllocSize - This node represents a sizeof expression.
-  ///
-  class SCEVAllocSizeExpr : public SCEVTargetDataConstant {
-    friend class ScalarEvolution;
-
-    const Type *AllocTy;
-    SCEVAllocSizeExpr(const FoldingSetNodeID &ID,
-                      const Type *ty, const Type *allocty) :
-      SCEVTargetDataConstant(ID, scAllocSize, ty),
-      AllocTy(allocty) {}
-
-  public:
-    const Type *getAllocType() const { return AllocTy; }
-
-    virtual void print(raw_ostream &OS) const;
-
-    /// Methods for support type inquiry through isa, cast, and dyn_cast:
-    static inline bool classof(const SCEVAllocSizeExpr *S) { return true; }
-    static inline bool classof(const SCEV *S) {
-      return S->getSCEVType() == scAllocSize;
-    }
-  };
-
-  //===--------------------------------------------------------------------===//
   /// SCEVUnknown - This means that we are dealing with an entirely unknown SCEV
   /// value, and only represent it as its LLVM Value.  This is the "bottom"
   /// value for the analysis.
@@ -615,6 +529,16 @@ namespace llvm {
   public:
     Value *getValue() const { return V; }
 
+    /// isSizeOf, isAlignOf, isOffsetOf - Test whether this is a special
+    /// constant representing a type size, alignment, or field offset in
+    /// a target-independent manner, and hasn't happened to have been
+    /// folded with other operations into something unrecognizable. This
+    /// is mainly only useful for pretty-printing and other situations
+    /// where it isn't absolutely required for these to succeed.
+    bool isSizeOf(const Type *&AllocTy) const;
+    bool isAlignOf(const Type *&AllocTy) const;
+    bool isOffsetOf(const Type *&STy, Constant *&FieldNo) const;
+
     virtual bool isLoopInvariant(const Loop *L) const;
     virtual bool hasComputableLoopEvolution(const Loop *QL) const {
       return false; // not computable
@@ -665,10 +589,6 @@ namespace llvm {
         return ((SC*)this)->visitSMaxExpr((const SCEVSMaxExpr*)S);
       case scUMaxExpr:
         return ((SC*)this)->visitUMaxExpr((const SCEVUMaxExpr*)S);
-      case scFieldOffset:
-        return ((SC*)this)->visitFieldOffsetExpr((const SCEVFieldOffsetExpr*)S);
-      case scAllocSize:
-        return ((SC*)this)->visitAllocSizeExpr((const SCEVAllocSizeExpr*)S);
       case scUnknown:
         return ((SC*)this)->visitUnknown((const SCEVUnknown*)S);
       case scCouldNotCompute:
diff --git a/libclamav/c++/llvm/include/llvm/Assembly/AsmAnnotationWriter.h b/libclamav/c++/llvm/include/llvm/Assembly/AsmAnnotationWriter.h
index 6c3ddaf..6d75720 100644
--- a/libclamav/c++/llvm/include/llvm/Assembly/AsmAnnotationWriter.h
+++ b/libclamav/c++/llvm/include/llvm/Assembly/AsmAnnotationWriter.h
@@ -23,29 +23,34 @@ class Function;
 class BasicBlock;
 class Instruction;
 class raw_ostream;
+class formatted_raw_ostream;
 
 class AssemblyAnnotationWriter {
 public:
 
   virtual ~AssemblyAnnotationWriter();
 
-  // emitFunctionAnnot - This may be implemented to emit a string right before
-  // the start of a function.
+  /// emitFunctionAnnot - This may be implemented to emit a string right before
+  /// the start of a function.
   virtual void emitFunctionAnnot(const Function *F, raw_ostream &OS) {}
 
-  // emitBasicBlockStartAnnot - This may be implemented to emit a string right
-  // after the basic block label, but before the first instruction in the block.
+  /// emitBasicBlockStartAnnot - This may be implemented to emit a string right
+  /// after the basic block label, but before the first instruction in the block.
   virtual void emitBasicBlockStartAnnot(const BasicBlock *BB, raw_ostream &OS){
   }
 
-  // emitBasicBlockEndAnnot - This may be implemented to emit a string right
-  // after the basic block.
+  /// emitBasicBlockEndAnnot - This may be implemented to emit a string right
+  /// after the basic block.
   virtual void emitBasicBlockEndAnnot(const BasicBlock *BB, raw_ostream &OS){
   }
 
-  // emitInstructionAnnot - This may be implemented to emit a string right
-  // before an instruction is emitted.
+  /// emitInstructionAnnot - This may be implemented to emit a string right
+  /// before an instruction is emitted.
   virtual void emitInstructionAnnot(const Instruction *I, raw_ostream &OS) {}
+
+  /// printInfoComment - This may be implemented to emit a comment to the
+  /// right of an instruction or global value.
+  virtual void printInfoComment(const Value &V, formatted_raw_ostream &OS) {}
 };
 
 } // End llvm namespace
diff --git a/libclamav/c++/llvm/include/llvm/Attributes.h b/libclamav/c++/llvm/include/llvm/Attributes.h
index 7fa5d4a..126c290 100644
--- a/libclamav/c++/llvm/include/llvm/Attributes.h
+++ b/libclamav/c++/llvm/include/llvm/Attributes.h
@@ -58,6 +58,13 @@ const Attributes NoRedZone = 1<<22; /// disable redzone
 const Attributes NoImplicitFloat = 1<<23; /// disable implicit floating point
                                           /// instructions.
 const Attributes Naked           = 1<<24; ///< Naked function
+const Attributes InlineHint      = 1<<25;  ///< Source said inlining was
+                                           ///< desirable.
+const Attributes StackAlignment  = 31<<26; ///< Alignment of stack for
+                                           ///< function (5 bits), stored as
+                                           ///< log2 of alignment with +1 bias;
+                                           ///< 0 means unaligned (different
+                                           ///< from alignstack(1)).
 
 /// @brief Attributes that only apply to function parameters.
 const Attributes ParameterOnly = ByVal | Nest | StructRet | NoCapture;
@@ -66,7 +73,7 @@ const Attributes ParameterOnly = ByVal | Nest | StructRet | NoCapture;
 /// be used on return values or function parameters.
 const Attributes FunctionOnly = NoReturn | NoUnwind | ReadNone | ReadOnly |
   NoInline | AlwaysInline | OptimizeForSize | StackProtect | StackProtectReq |
-  NoRedZone | NoImplicitFloat | Naked;
+  NoRedZone | NoImplicitFloat | Naked | InlineHint | StackAlignment;
 
 /// @brief Parameter attributes that do not apply to vararg call arguments.
 const Attributes VarArgsIncompatible = StructRet;
@@ -103,6 +110,28 @@ inline unsigned getAlignmentFromAttrs(Attributes A) {
   return 1U << ((Align >> 16) - 1);
 }
 
+/// This turns an int stack alignment (which must be a power of 2) into
+/// the form used internally in Attributes.
+inline Attributes constructStackAlignmentFromInt(unsigned i) {
+  // Default alignment, allow the target to define how to align it.
+  if (i == 0)
+    return 0;
+
+  assert(isPowerOf2_32(i) && "Alignment must be a power of two.");
+  assert(i <= 0x40000000 && "Alignment too large.");
+  return (Log2_32(i)+1) << 26;
+}
+
+/// This returns the stack alignment field of an attribute as a byte alignment
+/// value.
+inline unsigned getStackAlignmentFromAttrs(Attributes A) {
+  Attributes StackAlign = A & Attribute::StackAlignment;
+  if (StackAlign == 0)
+    return 0;
+
+  return 1U << ((StackAlign >> 26) - 1);
+}
+
 
 /// The set of Attributes set in Attributes is converted to a
 /// string of equivalent mnemonics. This is, presumably, for writing out
diff --git a/libclamav/c++/llvm/include/llvm/Bitcode/Archive.h b/libclamav/c++/llvm/include/llvm/Bitcode/Archive.h
index e19e4c0..67f2a4a 100644
--- a/libclamav/c++/llvm/include/llvm/Bitcode/Archive.h
+++ b/libclamav/c++/llvm/include/llvm/Bitcode/Archive.h
@@ -27,7 +27,6 @@ namespace llvm {
   class MemoryBuffer;
 
 // Forward declare classes
-class ModuleProvider;      // From VMCore
 class Module;              // From VMCore
 class Archive;             // Declared below
 class ArchiveMemberHeader; // Internal implementation class
@@ -374,14 +373,14 @@ class Archive {
     /// returns the associated module that defines that symbol. This method can
     /// be called as many times as necessary. This is handy for linking the
     /// archive into another module based on unresolved symbols. Note that the
-    /// ModuleProvider returned by this accessor should not be deleted by the
-    /// caller. It is managed internally by the Archive class. It is possible
-    /// that multiple calls to this accessor will return the same ModuleProvider
-    /// instance because the associated module defines multiple symbols.
-    /// @returns The ModuleProvider* found or null if the archive does not
-    /// contain a module that defines the \p symbol.
+    /// Module returned by this accessor should not be deleted by the caller. It
+    /// is managed internally by the Archive class. It is possible that multiple
+    /// calls to this accessor will return the same Module instance because the
+    /// associated module defines multiple symbols.
+    /// @returns The Module* found or null if the archive does not contain a
+    /// module that defines the \p symbol.
     /// @brief Look up a module by symbol name.
-    ModuleProvider* findModuleDefiningSymbol(
+    Module* findModuleDefiningSymbol(
       const std::string& symbol,  ///< Symbol to be sought
       std::string* ErrMessage     ///< Error message storage, if non-zero
     );
@@ -397,7 +396,7 @@ class Archive {
     /// @brief Look up multiple symbols in the archive.
     bool findModulesDefiningSymbols(
       std::set<std::string>& symbols,     ///< Symbols to be sought
-      std::set<ModuleProvider*>& modules, ///< The modules matching \p symbols
+      std::set<Module*>& modules,         ///< The modules matching \p symbols
       std::string* ErrMessage             ///< Error msg storage, if non-zero
     );
 
@@ -513,9 +512,9 @@ class Archive {
 
     /// This type is used to keep track of bitcode modules loaded from the
     /// symbol table. It maps the file offset to a pair that consists of the
-    /// associated ArchiveMember and the ModuleProvider.
+    /// associated ArchiveMember and the Module.
     /// @brief Module mapping type
-    typedef std::map<unsigned,std::pair<ModuleProvider*,ArchiveMember*> >
+    typedef std::map<unsigned,std::pair<Module*,ArchiveMember*> >
       ModuleMap;
 
 
diff --git a/libclamav/c++/llvm/include/llvm/Bitcode/BitstreamWriter.h b/libclamav/c++/llvm/include/llvm/Bitcode/BitstreamWriter.h
index 2b1b85e..31d513c 100644
--- a/libclamav/c++/llvm/include/llvm/Bitcode/BitstreamWriter.h
+++ b/libclamav/c++/llvm/include/llvm/Bitcode/BitstreamWriter.h
@@ -291,7 +291,7 @@ private:
   /// EmitRecordWithAbbrevImpl - This is the core implementation of the record
   /// emission code.  If BlobData is non-null, then it specifies an array of
   /// data that should be emitted as part of the Blob or Array operand that is
-  /// known to exist at the end of the the record.
+  /// known to exist at the end of the record.
   template<typename uintty>
   void EmitRecordWithAbbrevImpl(unsigned Abbrev, SmallVectorImpl<uintty> &Vals,
                                 StringRef Blob) {
diff --git a/libclamav/c++/llvm/include/llvm/Bitcode/LLVMBitCodes.h b/libclamav/c++/llvm/include/llvm/Bitcode/LLVMBitCodes.h
index 9bb50d4..a980df8 100644
--- a/libclamav/c++/llvm/include/llvm/Bitcode/LLVMBitCodes.h
+++ b/libclamav/c++/llvm/include/llvm/Bitcode/LLVMBitCodes.h
@@ -94,7 +94,8 @@ namespace bitc {
     TYPE_CODE_FP128    = 14,   // LONG DOUBLE (112 bit mantissa)
     TYPE_CODE_PPC_FP128= 15,   // PPC LONG DOUBLE (2 doubles)
 
-    TYPE_CODE_METADATA = 16    // METADATA
+    TYPE_CODE_METADATA = 16,   // METADATA
+    TYPE_CODE_UNION    = 17    // UNION: [eltty x N]
   };
 
   // The type symbol table only has one code (TST_ENTRY_CODE).
diff --git a/libclamav/c++/llvm/include/llvm/Bitcode/ReaderWriter.h b/libclamav/c++/llvm/include/llvm/Bitcode/ReaderWriter.h
index 7b74bdf..45eb801 100644
--- a/libclamav/c++/llvm/include/llvm/Bitcode/ReaderWriter.h
+++ b/libclamav/c++/llvm/include/llvm/Bitcode/ReaderWriter.h
@@ -18,21 +18,20 @@
 
 namespace llvm {
   class Module;
-  class ModuleProvider;
   class MemoryBuffer;
   class ModulePass;
   class BitstreamWriter;
   class LLVMContext;
   class raw_ostream;
   
-  /// getBitcodeModuleProvider - Read the header of the specified bitcode buffer
+  /// getLazyBitcodeModule - Read the header of the specified bitcode buffer
   /// and prepare for lazy deserialization of function bodies.  If successful,
   /// this takes ownership of 'buffer' and returns a non-null pointer.  On
   /// error, this returns null, *does not* take ownership of Buffer, and fills
   /// in *ErrMsg with an error description if ErrMsg is non-null.
-  ModuleProvider *getBitcodeModuleProvider(MemoryBuffer *Buffer,
-                                           LLVMContext& Context,
-                                           std::string *ErrMsg = 0);
+  Module *getLazyBitcodeModule(MemoryBuffer *Buffer,
+                               LLVMContext& Context,
+                               std::string *ErrMsg = 0);
 
   /// ParseBitcodeFile - Read the specified bitcode file, returning the module.
   /// If an error occurs, this returns null and fills in *ErrMsg if it is
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/AsmPrinter.h b/libclamav/c++/llvm/include/llvm/CodeGen/AsmPrinter.h
index 8607281..28a1a3e 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/AsmPrinter.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/AsmPrinter.h
@@ -80,13 +80,6 @@ namespace llvm {
     DwarfWriter *DW;
 
   public:
-    /// Flags to specify different kinds of comments to output in
-    /// assembly code.  These flags carry semantic information not
-    /// otherwise easily derivable from the IR text.
-    ///
-    enum CommentFlag {
-      ReloadReuse = 0x1
-    };
 
     /// Output stream on which we're printing assembly code.
     ///
@@ -149,7 +142,8 @@ namespace llvm {
 
   protected:
     explicit AsmPrinter(formatted_raw_ostream &o, TargetMachine &TM,
-                        const MCAsmInfo *T, bool V);
+                        MCContext &Ctx, MCStreamer &Streamer,
+                        const MCAsmInfo *T);
     
   public:
     virtual ~AsmPrinter();
@@ -207,21 +201,51 @@ namespace llvm {
                                        unsigned AsmVariant, 
                                        const char *ExtraCode);
     
+    /// runOnMachineFunction - Emit the specified function out to the
+    /// OutStreamer.
+    virtual bool runOnMachineFunction(MachineFunction &MF) {
+      SetupMachineFunction(MF);
+      EmitFunctionHeader();
+      EmitFunctionBody();
+      return false;
+    }      
+    
     /// SetupMachineFunction - This should be called when a new MachineFunction
     /// is being processed from runOnMachineFunction.
     void SetupMachineFunction(MachineFunction &MF);
     
+    /// EmitFunctionHeader - This method emits the header for the current
+    /// function.
+    void EmitFunctionHeader();
+    
+    /// EmitFunctionBody - This method emits the body and trailer for a
+    /// function.
+    void EmitFunctionBody();
+
+    /// EmitInstruction - Targets should implement this to emit instructions.
+    virtual void EmitInstruction(const MachineInstr *MI) {
+      assert(0 && "EmitInstruction not implemented");
+    }
+    
+    /// EmitFunctionBodyStart - Targets can override this to emit stuff before
+    /// the first basic block in the function.
+    virtual void EmitFunctionBodyStart() {}
+
+    /// EmitFunctionBodyEnd - Targets can override this to emit stuff after
+    /// the last basic block in the function.
+    virtual void EmitFunctionBodyEnd() {}
+    
     /// EmitConstantPool - Print to the current output stream assembly
     /// representations of the constants in the constant pool MCP. This is
     /// used to print out constants which have been "spilled to memory" by
     /// the code generator.
     ///
-    void EmitConstantPool(MachineConstantPool *MCP);
-
+    virtual void EmitConstantPool();
+    
     /// EmitJumpTableInfo - Print assembly representations of the jump tables 
     /// used by the current function to the current output stream.  
     ///
-    void EmitJumpTableInfo(MachineFunction &MF);
+    void EmitJumpTableInfo();
     
     /// EmitGlobalVariable - Emit the specified global variable to the .s file.
     virtual void EmitGlobalVariable(const GlobalVariable *GV);
@@ -276,19 +300,15 @@ namespace llvm {
 
     /// printLabel - This method prints a local label used by debug and
     /// exception handling tables.
-    void printLabel(const MachineInstr *MI) const;
     void printLabel(unsigned Id) const;
 
     /// printDeclare - This method prints a local variable declaration used by
     /// debug tables.
     void printDeclare(const MachineInstr *MI) const;
 
-    /// EmitComments - Pretty-print comments for instructions
-    void EmitComments(const MachineInstr &MI) const;
-
     /// GetGlobalValueSymbol - Return the MCSymbol for the specified global
     /// value.
-    MCSymbol *GetGlobalValueSymbol(const GlobalValue *GV) const;
+    virtual MCSymbol *GetGlobalValueSymbol(const GlobalValue *GV) const;
 
     /// GetSymbolWithGlobalValueBase - Return the MCSymbol for a symbol with
     /// global value name as its base, with the specified suffix, and where the
@@ -313,11 +333,9 @@ namespace llvm {
 
     /// GetBlockAddressSymbol - Return the MCSymbol used to satisfy BlockAddress
     /// uses of the specified basic block.
-    MCSymbol *GetBlockAddressSymbol(const BlockAddress *BA,
-                                    const char *Suffix = "") const;
+    MCSymbol *GetBlockAddressSymbol(const BlockAddress *BA) const;
     MCSymbol *GetBlockAddressSymbol(const Function *F,
-                                    const BasicBlock *BB,
-                                    const char *Suffix = "") const;
+                                    const BasicBlock *BB) const;
 
     /// EmitBasicBlockStart - This method prints the label for the specified
     /// MachineBasicBlock, an alignment (if present) and a comment describing
@@ -331,12 +349,21 @@ namespace llvm {
     void EmitGlobalConstant(const Constant* CV, unsigned AddrSpace = 0);
     
   protected:
+    virtual void EmitFunctionEntryLabel();
+    
     virtual void EmitMachineConstantPoolValue(MachineConstantPoolValue *MCPV);
 
+    /// printOffset - This is just a convenient handler for printing offsets.
+    void printOffset(int64_t Offset) const;
+
+  private:
+
     /// processDebugLoc - Processes the debug information of each machine
     /// instruction's DebugLoc. 
     void processDebugLoc(const MachineInstr *MI, bool BeforePrintingInsn);
     
+    void printLabelInst(const MachineInstr *MI) const;
+
     /// printInlineAsm - This method formats and prints the specified machine
     /// instruction that is an inline asm.
     void printInlineAsm(const MachineInstr *MI) const;
@@ -348,14 +375,12 @@ namespace llvm {
     /// printKill - This method prints the specified kill machine instruction.
     void printKill(const MachineInstr *MI) const;
 
-    /// printVisibility - This prints visibility information about symbol, if
+    /// EmitVisibility - This emits visibility information about a symbol, if
     /// this is supported by the target.
-    void printVisibility(MCSymbol *Sym, unsigned Visibility) const;
+    void EmitVisibility(MCSymbol *Sym, unsigned Visibility) const;
+    
+    void EmitLinkage(unsigned Linkage, MCSymbol *GVSym) const;
     
-    /// printOffset - This is just convenient handler for printing offsets.
-    void printOffset(int64_t Offset) const;
- 
-  private:
     void EmitJumpTableEntry(const MachineJumpTableInfo *MJTI,
                             const MachineBasicBlock *MBB,
                             unsigned uid) const;
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/DAGISelHeader.h b/libclamav/c++/llvm/include/llvm/CodeGen/DAGISelHeader.h
index 4d50879..f9490a7 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/DAGISelHeader.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/DAGISelHeader.h
@@ -132,4 +132,268 @@ void SelectRoot(SelectionDAG &DAG) {
   CurDAG->setRoot(Dummy.getValue());
 }
 
+
+/// CheckInteger - Return true if the specified node is not a ConstantSDNode or
+/// if it doesn't have the specified value.
+static bool CheckInteger(SDValue V, int64_t Val) {
+  ConstantSDNode *C = dyn_cast<ConstantSDNode>(V);
+  return C == 0 || C->getSExtValue() != Val;
+}
+
+/// CheckAndImmediate - Check to see if the specified node is an and with an
+/// immediate returning true on failure.
+///
+/// FIXME: Inline this gunk into CheckAndMask.
+bool CheckAndImmediate(SDValue V, int64_t Val) {
+  if (V->getOpcode() == ISD::AND)
+    if (ConstantSDNode *C = dyn_cast<ConstantSDNode>(V->getOperand(1)))
+      if (CheckAndMask(V.getOperand(0), C, Val))
+        return false;
+  return true;
+}
+
+/// CheckOrImmediate - Check to see if the specified node is an or with an
+/// immediate returning true on failure.
+///
+/// FIXME: Inline this gunk into CheckOrMask.
+bool CheckOrImmediate(SDValue V, int64_t Val) {
+  if (V->getOpcode() == ISD::OR)
+    if (ConstantSDNode *C = dyn_cast<ConstantSDNode>(V->getOperand(1)))
+      if (CheckOrMask(V.getOperand(0), C, Val))
+        return false;
+  return true;
+}
+
+static int8_t GetInt1(const unsigned char *MatcherTable, unsigned &Idx) {
+  return MatcherTable[Idx++];
+}
+
+static int16_t GetInt2(const unsigned char *MatcherTable, unsigned &Idx) {
+  int16_t Val = GetInt1(MatcherTable, Idx);
+  Val |= int16_t(GetInt1(MatcherTable, Idx)) << 8;
+  return Val;
+}
+
+static int32_t GetInt4(const unsigned char *MatcherTable, unsigned &Idx) {
+  int32_t Val = GetInt2(MatcherTable, Idx);
+  Val |= int32_t(GetInt2(MatcherTable, Idx)) << 16;
+  return Val;
+}
+
+static int64_t GetInt8(const unsigned char *MatcherTable, unsigned &Idx) {
+  int64_t Val = GetInt4(MatcherTable, Idx);
+  Val |= int64_t(GetInt4(MatcherTable, Idx)) << 32;
+  return Val;
+}
+
+enum BuiltinOpcodes {
+  OPC_Emit,
+  OPC_Push,
+  OPC_Record,
+  OPC_MoveChild,
+  OPC_MoveParent,
+  OPC_CheckSame,
+  OPC_CheckPatternPredicate,
+  OPC_CheckPredicate,
+  OPC_CheckOpcode,
+  OPC_CheckType,
+  OPC_CheckInteger1, OPC_CheckInteger2, OPC_CheckInteger4, OPC_CheckInteger8,
+  OPC_CheckCondCode,
+  OPC_CheckValueType,
+  OPC_CheckComplexPat,
+  OPC_CheckAndImm1, OPC_CheckAndImm2, OPC_CheckAndImm4, OPC_CheckAndImm8,
+  OPC_CheckOrImm1, OPC_CheckOrImm2, OPC_CheckOrImm4, OPC_CheckOrImm8
+};
+
+struct MatchScope {
+  /// FailIndex - If this match fails, this is the index to continue with.
+  unsigned FailIndex;
+  
+  /// NodeStackSize - The size of the node stack when the scope was formed.
+  unsigned NodeStackSize;
+  
+  /// NumRecordedNodes - The number of recorded nodes when the scope was formed.
+  unsigned NumRecordedNodes;
+};
+
+SDNode *SelectCodeCommon(SDNode *NodeToMatch, const unsigned char *MatcherTable,
+                         unsigned TableSize) {
+  switch (NodeToMatch->getOpcode()) {
+  default:
+    break;
+  case ISD::EntryToken:       // These nodes remain the same.
+  case ISD::BasicBlock:
+  case ISD::Register:
+  case ISD::HANDLENODE:
+  case ISD::TargetConstant:
+  case ISD::TargetConstantFP:
+  case ISD::TargetConstantPool:
+  case ISD::TargetFrameIndex:
+  case ISD::TargetExternalSymbol:
+  case ISD::TargetBlockAddress:
+  case ISD::TargetJumpTable:
+  case ISD::TargetGlobalTLSAddress:
+  case ISD::TargetGlobalAddress:
+  case ISD::TokenFactor:
+  case ISD::CopyFromReg:
+  case ISD::CopyToReg:
+    return 0;
+  case ISD::AssertSext:
+  case ISD::AssertZext:
+    ReplaceUses(SDValue(NodeToMatch, 0), NodeToMatch->getOperand(0));
+    return 0;
+  case ISD::INLINEASM: return Select_INLINEASM(NodeToMatch);
+  case ISD::EH_LABEL:  return Select_EH_LABEL(NodeToMatch);
+  case ISD::UNDEF:     return Select_UNDEF(NodeToMatch);
+  }
+  
+  assert(!NodeToMatch->isMachineOpcode() && "Node already selected!");
+  
+  SmallVector<MatchScope, 8> MatchScopes;
+  
+  // RecordedNodes - This is the set of nodes that have been recorded by the
+  // state machine.
+  SmallVector<SDValue, 8> RecordedNodes;
+  
+  // Set up the node stack with NodeToMatch as the only node on the stack.
+  SmallVector<SDValue, 8> NodeStack;
+  SDValue N = SDValue(NodeToMatch, 0);
+  NodeStack.push_back(N);
+  
+  // Interpreter starts at opcode #0.
+  unsigned MatcherIndex = 0;
+  while (1) {
+    assert(MatcherIndex < TableSize && "Invalid index");
+    switch ((BuiltinOpcodes)MatcherTable[MatcherIndex++]) {
+    case OPC_Emit: {
+      errs() << "EMIT NODE\n";
+      return 0;
+    }
+    case OPC_Push: {
+      unsigned NumToSkip = MatcherTable[MatcherIndex++];
+      MatchScope NewEntry;
+      NewEntry.FailIndex = MatcherIndex+NumToSkip;
+      NewEntry.NodeStackSize = NodeStack.size();
+      NewEntry.NumRecordedNodes = RecordedNodes.size();
+      MatchScopes.push_back(NewEntry);
+      continue;
+    }
+    case OPC_Record:
+      // Remember this node, it may end up being an operand in the pattern.
+      RecordedNodes.push_back(N);
+      continue;
+        
+    case OPC_MoveChild: {
+      unsigned Child = MatcherTable[MatcherIndex++];
+      if (Child >= N.getNumOperands())
+        break;  // Match fails if out of range child #.
+      N = N.getOperand(Child);
+      NodeStack.push_back(N);
+      continue;
+    }
+        
+    case OPC_MoveParent:
+      // Pop the current node off the NodeStack.
+      NodeStack.pop_back();
+      assert(!NodeStack.empty() && "Node stack imbalance!");
+      N = NodeStack.back();  
+      continue;
+     
+    case OPC_CheckSame: {
+      // Accept if it is exactly the same as a previously recorded node.
+      unsigned RecNo = MatcherTable[MatcherIndex++];
+      assert(RecNo < RecordedNodes.size() && "Invalid CheckSame");
+      if (N != RecordedNodes[RecNo]) break;
+      continue;
+    }
+    case OPC_CheckPatternPredicate: {
+      unsigned PredNo = MatcherTable[MatcherIndex++];
+      (void)PredNo;
+      // FIXME: CHECK IT.
+      continue;
+    }
+    case OPC_CheckPredicate: {
+      unsigned PredNo = MatcherTable[MatcherIndex++];
+      (void)PredNo;
+      // FIXME: CHECK IT.
+      continue;
+    }
+    case OPC_CheckComplexPat: {
+      unsigned PatNo = MatcherTable[MatcherIndex++];
+      (void)PatNo;
+      // FIXME: CHECK IT.
+      continue;
+    }
+        
+    case OPC_CheckOpcode:
+      if (N->getOpcode() != MatcherTable[MatcherIndex++]) break;
+      continue;
+    case OPC_CheckType:
+      if (N.getValueType() !=
+          (MVT::SimpleValueType)MatcherTable[MatcherIndex++]) break;
+      continue;
+    case OPC_CheckCondCode:
+      if (cast<CondCodeSDNode>(N)->get() !=
+          (ISD::CondCode)MatcherTable[MatcherIndex++]) break;
+      continue;
+    case OPC_CheckValueType:
+      if (cast<VTSDNode>(N)->getVT() !=
+          (MVT::SimpleValueType)MatcherTable[MatcherIndex++]) break;
+      continue;
+
+    case OPC_CheckInteger1:
+      if (CheckInteger(N, GetInt1(MatcherTable, MatcherIndex))) break;
+      continue;
+    case OPC_CheckInteger2:
+      if (CheckInteger(N, GetInt2(MatcherTable, MatcherIndex))) break;
+      continue;
+    case OPC_CheckInteger4:
+      if (CheckInteger(N, GetInt4(MatcherTable, MatcherIndex))) break;
+      continue;
+    case OPC_CheckInteger8:
+      if (CheckInteger(N, GetInt8(MatcherTable, MatcherIndex))) break;
+      continue;
+        
+    case OPC_CheckAndImm1:
+      if (CheckAndImmediate(N, GetInt1(MatcherTable, MatcherIndex))) break;
+      continue;
+    case OPC_CheckAndImm2:
+      if (CheckAndImmediate(N, GetInt2(MatcherTable, MatcherIndex))) break;
+      continue;
+    case OPC_CheckAndImm4:
+      if (CheckAndImmediate(N, GetInt4(MatcherTable, MatcherIndex))) break;
+      continue;
+    case OPC_CheckAndImm8:
+      if (CheckAndImmediate(N, GetInt8(MatcherTable, MatcherIndex))) break;
+      continue;
+
+    case OPC_CheckOrImm1:
+      if (CheckOrImmediate(N, GetInt1(MatcherTable, MatcherIndex))) break;
+      continue;
+    case OPC_CheckOrImm2:
+      if (CheckOrImmediate(N, GetInt2(MatcherTable, MatcherIndex))) break;
+      continue;
+    case OPC_CheckOrImm4:
+      if (CheckOrImmediate(N, GetInt4(MatcherTable, MatcherIndex))) break;
+      continue;
+    case OPC_CheckOrImm8:
+      if (CheckOrImmediate(N, GetInt8(MatcherTable, MatcherIndex))) break;
+      continue;
+    }
+    
+    // If the code reached this point, the match failed, so pop out to the
+    // next match scope.
+    if (MatchScopes.empty()) {
+      CannotYetSelect(NodeToMatch);
+      return 0;
+    }
+    
+    RecordedNodes.resize(MatchScopes.back().NumRecordedNodes);
+    NodeStack.resize(MatchScopes.back().NodeStackSize);
+    MatcherIndex = MatchScopes.back().FailIndex;
+    MatchScopes.pop_back();
+  }
+}
+    
+
 #endif /* LLVM_CODEGEN_DAGISEL_HEADER_H */
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/DwarfWriter.h b/libclamav/c++/llvm/include/llvm/CodeGen/DwarfWriter.h
index 460c3c7..d59e22a 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/DwarfWriter.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/DwarfWriter.h
@@ -76,11 +76,11 @@ public:
   
   /// BeginFunction - Gather pre-function debug information.  Assumes being 
   /// emitted immediately after the function entry point.
-  void BeginFunction(MachineFunction *MF);
+  void BeginFunction(const MachineFunction *MF);
   
   /// EndFunction - Gather and emit post-function debug information.
   ///
-  void EndFunction(MachineFunction *MF);
+  void EndFunction(const MachineFunction *MF);
 
   /// RecordSourceLine - Register a source line with debug info. Returns a
   /// unique label ID used to generate a label and provide correspondence to
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/FileWriters.h b/libclamav/c++/llvm/include/llvm/CodeGen/FileWriters.h
deleted file mode 100644
index 9dba838..0000000
--- a/libclamav/c++/llvm/include/llvm/CodeGen/FileWriters.h
+++ /dev/null
@@ -1,37 +0,0 @@
-//===-- FileWriters.h - File Writers Creation Functions ---------*- C++ -*-===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// Functions to add the various file writer passes.
-//
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_CODEGEN_FILEWRITERS_H
-#define LLVM_CODEGEN_FILEWRITERS_H
-
-namespace llvm {
-
-  class PassManagerBase;
-  class ObjectCodeEmitter;
-  class TargetMachine;
-  class raw_ostream;
-  class formatted_raw_ostream;
-  class MachineFunctionPass;
-  class MCAsmInfo;
-  class MCCodeEmitter;
-
-  ObjectCodeEmitter *AddELFWriter(PassManagerBase &FPM, raw_ostream &O,
-                                  TargetMachine &TM);
-  MachineFunctionPass *createMachOWriter(formatted_raw_ostream &O,
-                                         TargetMachine &TM,
-                                         const MCAsmInfo *T, 
-                                         MCCodeEmitter *MCE);
-
-} // end llvm namespace
-
-#endif // LLVM_CODEGEN_FILEWRITERS_H
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/JITCodeEmitter.h b/libclamav/c++/llvm/include/llvm/CodeGen/JITCodeEmitter.h
index 9c4e5b9..0a1d4f4 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/JITCodeEmitter.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/JITCodeEmitter.h
@@ -146,7 +146,7 @@ public:
     }
   }
 
-  /// emitAlignment - Move the CurBufferPtr pointer up the the specified
+  /// emitAlignment - Move the CurBufferPtr pointer up to the specified
   /// alignment (saturated to BufferEnd of course).
   void emitAlignment(unsigned Alignment) {
     if (Alignment == 0) Alignment = 1;
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/LiveInterval.h b/libclamav/c++/llvm/include/llvm/CodeGen/LiveInterval.h
index e31a7f0..512c94d 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/LiveInterval.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/LiveInterval.h
@@ -320,7 +320,7 @@ namespace llvm {
     /// advanceTo - Advance the specified iterator to point to the LiveRange
     /// containing the specified position, or end() if the position is past the
     /// end of the interval.  If no LiveRange contains this position, but the
-    /// position is in a hole, this method returns an iterator pointing the the
+    /// position is in a hole, this method returns an iterator pointing to the
     /// LiveRange immediately after the hole.
     iterator advanceTo(iterator I, SlotIndex Pos) {
       if (Pos >= endIndex())
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineBasicBlock.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineBasicBlock.h
index 283322b..db82ba5 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineBasicBlock.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineBasicBlock.h
@@ -21,6 +21,9 @@ namespace llvm {
 
 class BasicBlock;
 class MachineFunction;
+class MCContext;
+class MCSymbol;
+class StringRef;
 class raw_ostream;
 
 template <>
@@ -338,7 +341,7 @@ public:
                             bool isCond);
 
   /// findDebugLoc - find the next valid DebugLoc starting at MBBI, skipping
-  /// any DEBUG_VALUE instructions.  Return UnknownLoc if there is none.
+  /// any DBG_VALUE instructions.  Return UnknownLoc if there is none.
   DebugLoc findDebugLoc(MachineBasicBlock::iterator &MBBI);
 
   // Debugging methods.
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineCodeEmitter.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineCodeEmitter.h
index d598a93..48b4082 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineCodeEmitter.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineCodeEmitter.h
@@ -155,7 +155,7 @@ public:
     }
   }
 
-  /// emitAlignment - Move the CurBufferPtr pointer up the the specified
+  /// emitAlignment - Move the CurBufferPtr pointer up to the specified
   /// alignment (saturated to BufferEnd of course).
   void emitAlignment(unsigned Alignment) {
     if (Alignment == 0) Alignment = 1;
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineConstantPool.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineConstantPool.h
index 8d6c1d1..e6698a5 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineConstantPool.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineConstantPool.h
@@ -136,7 +136,7 @@ public:
     : TD(td), PoolAlignment(1) {}
   ~MachineConstantPool();
     
-  /// getConstantPoolAlignment - Return the the alignment required by
+  /// getConstantPoolAlignment - Return the alignment required by
   /// the whole constant pool, of which the first element must be aligned.
   unsigned getConstantPoolAlignment() const { return PoolAlignment; }
   
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineFrameInfo.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineFrameInfo.h
index 968e4ea..043e97f 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineFrameInfo.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineFrameInfo.h
@@ -276,6 +276,7 @@ public:
     assert(unsigned(ObjectIdx+NumFixedObjects) < Objects.size() &&
            "Invalid Object Idx!");
     Objects[ObjectIdx+NumFixedObjects].Alignment = Align;
+    MaxAlignment = std::max(MaxAlignment, Align);
   }
 
   /// getObjectOffset - Return the assigned stack offset of the specified object
@@ -328,19 +329,6 @@ public:
   ///
   void setMaxAlignment(unsigned Align) { MaxAlignment = Align; }
 
-  /// calculateMaxStackAlignment() - If there is a local object which requires
-  /// greater alignment than the current max alignment, adjust accordingly.
-  void calculateMaxStackAlignment() {
-    for (int i = getObjectIndexBegin(),
-         e = getObjectIndexEnd(); i != e; ++i) {
-      if (isDeadObjectIndex(i))
-        continue;
-
-      unsigned Align = getObjectAlignment(i);
-      MaxAlignment = std::max(MaxAlignment, Align);
-    }
-  }
-
   /// hasCalls - Return true if the current function has any function calls.
   /// This is only valid during or after prolog/epilog code emission.
   ///
@@ -402,6 +390,7 @@ public:
     Objects.push_back(StackObject(Size, Alignment, 0, false, isSS));
     int Index = (int)Objects.size()-NumFixedObjects-1;
     assert(Index >= 0 && "Bad frame index!");
+    MaxAlignment = std::max(MaxAlignment, Alignment);
     return Index;
   }
 
@@ -412,6 +401,7 @@ public:
   int CreateSpillStackObject(uint64_t Size, unsigned Alignment) {
     CreateStackObject(Size, Alignment, true);
     int Index = (int)Objects.size()-NumFixedObjects-1;
+    MaxAlignment = std::max(MaxAlignment, Alignment);
     return Index;
   }
 
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineFunction.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineFunction.h
index 253c124..3c5b466 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineFunction.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineFunction.h
@@ -33,6 +33,7 @@ class MachineRegisterInfo;
 class MachineFrameInfo;
 class MachineConstantPool;
 class MachineJumpTableInfo;
+class Pass;
 class TargetMachine;
 class TargetRegisterClass;
 
@@ -177,6 +178,11 @@ public:
   ///
   void setAlignment(unsigned A) { Alignment = A; }
 
+  /// EnsureAlignment - Make sure the function is at least 'A' bits aligned.
+  void EnsureAlignment(unsigned A) {
+    if (Alignment < A) Alignment = A;
+  }
+  
   /// getInfo - Keep track of various per-function pieces of information for
   /// backends that would like to do so.
   ///
@@ -324,7 +330,7 @@ public:
                                    bool NoImp = false);
 
   /// CloneMachineInstr - Create a new MachineInstr which is a copy of the
-  /// 'Orig' instruction, identical in all ways except the the instruction
+  /// 'Orig' instruction, identical in all ways except the instruction
   /// has no parent, prev, or next.
   ///
   /// See also TargetInstrInfo::duplicate() for target-specific fixes to cloned
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineInstr.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineInstr.h
index c2a0578..6e33fb3 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineInstr.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineInstr.h
@@ -19,9 +19,9 @@
 #include "llvm/ADT/ilist.h"
 #include "llvm/ADT/ilist_node.h"
 #include "llvm/ADT/STLExtras.h"
-#include "llvm/CodeGen/AsmPrinter.h"
 #include "llvm/CodeGen/MachineOperand.h"
 #include "llvm/Target/TargetInstrDesc.h"
+#include "llvm/Target/TargetOpcodes.h"
 #include "llvm/Support/DebugLoc.h"
 #include <vector>
 
@@ -41,6 +41,14 @@ class MachineInstr : public ilist_node<MachineInstr> {
 public:
   typedef MachineMemOperand **mmo_iterator;
 
+  /// Flags to specify different kinds of comments to output in
+  /// assembly code.  These flags carry semantic information not
+  /// otherwise easily derivable from the IR text.
+  ///
+  enum CommentFlag {
+    ReloadReuse = 0x1
+  };
+  
 private:
   const TargetInstrDesc *TID;           // Instruction descriptor.
   unsigned short NumImplicitOps;        // Number of implicit operands (which
@@ -121,14 +129,14 @@ public:
 
   /// getAsmPrinterFlag - Return whether an AsmPrinter flag is set.
   ///
-  bool getAsmPrinterFlag(AsmPrinter::CommentFlag Flag) const {
+  bool getAsmPrinterFlag(CommentFlag Flag) const {
     return AsmPrinterFlags & Flag;
   }
 
   /// setAsmPrinterFlag - Set a flag for the AsmPrinter.
   ///
-  void setAsmPrinterFlag(unsigned short Flag) {
-    AsmPrinterFlags |= Flag;
+  void setAsmPrinterFlag(CommentFlag Flag) {
+    AsmPrinterFlags |= (unsigned short)Flag;
   }
 
   /// getDebugLoc - Returns the debug location id of this MachineInstr.
@@ -193,12 +201,31 @@ public:
 
   /// isLabel - Returns true if the MachineInstr represents a label.
   ///
-  bool isLabel() const;
-
-  /// isDebugLabel - Returns true if the MachineInstr represents a debug label.
-  ///
-  bool isDebugLabel() const;
-
+  bool isLabel() const {
+    return getOpcode() == TargetOpcode::DBG_LABEL ||
+           getOpcode() == TargetOpcode::EH_LABEL ||
+           getOpcode() == TargetOpcode::GC_LABEL;
+  }
+  
+  bool isDebugLabel() const { return getOpcode() == TargetOpcode::DBG_LABEL; }
+  bool isEHLabel() const { return getOpcode() == TargetOpcode::EH_LABEL; }
+  bool isGCLabel() const { return getOpcode() == TargetOpcode::GC_LABEL; }
+  bool isDebugValue() const { return getOpcode() == TargetOpcode::DBG_VALUE; }
+  
+  bool isPHI() const { return getOpcode() == TargetOpcode::PHI; }
+  bool isKill() const { return getOpcode() == TargetOpcode::KILL; }
+  bool isImplicitDef() const { return getOpcode()==TargetOpcode::IMPLICIT_DEF; }
+  bool isInlineAsm() const { return getOpcode() == TargetOpcode::INLINEASM; }
+  bool isExtractSubreg() const {
+    return getOpcode() == TargetOpcode::EXTRACT_SUBREG;
+  }
+  bool isInsertSubreg() const {
+    return getOpcode() == TargetOpcode::INSERT_SUBREG;
+  }
+  bool isSubregToReg() const {
+    return getOpcode() == TargetOpcode::SUBREG_TO_REG;
+  }
+  
   /// readsRegister - Return true if the MachineInstr reads the specified
   /// register. If TargetRegisterInfo is passed, then it also checks if there
   /// is a read of a super-register.
@@ -320,7 +347,7 @@ public:
 
   /// isInvariantLoad - Return true if this instruction is loading from a
   /// location whose value is invariant across the function.  For example,
-  /// loading a value from the constant pool or from from the argument area of
+  /// loading a value from the constant pool or from the argument area of
   /// a function if it does not change.  This should only return true of *all*
   /// loads the instruction does are invariant (if it does multiple loads).
   bool isInvariantLoad(AliasAnalysis *AA) const;
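The MachineInstr.h changes above introduce two idioms: a typed `CommentFlag` enum OR'd into the `unsigned short` flag word, and inline opcode predicates replacing the out-of-line `isLabel()`/`isDebugLabel()`. A compilable sketch with hypothetical opcode values (not LLVM's real `TargetOpcode` constants):

```cpp
#include <cassert>

// Illustrative opcodes and flags; the structure mirrors the patch above.
enum CommentFlag { ReloadReuse = 0x1 };
enum Opcode { DBG_LABEL = 1, EH_LABEL = 2, GC_LABEL = 3, OTHER = 4 };

struct Instr {
  unsigned short AsmPrinterFlags = 0;
  Opcode Op = OTHER;

  bool getAsmPrinterFlag(CommentFlag F) const { return AsmPrinterFlags & F; }
  void setAsmPrinterFlag(CommentFlag F) {
    AsmPrinterFlags |= (unsigned short)F;  // typed flag, cast at the OR
  }
  // Inline predicate in place of an out-of-line isLabel() definition.
  bool isLabel() const {
    return Op == DBG_LABEL || Op == EH_LABEL || Op == GC_LABEL;
  }
  bool isDebugLabel() const { return Op == DBG_LABEL; }
};
```

Taking the flag as `CommentFlag` instead of `unsigned short` lets the compiler reject arbitrary integers at the call site while the storage stays a compact bitmask.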
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineInstrBuilder.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineInstrBuilder.h
index 8eb0add..a263a97 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineInstrBuilder.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineInstrBuilder.h
@@ -32,6 +32,7 @@ namespace RegState {
     Dead           = 0x10,
     Undef          = 0x20,
     EarlyClobber   = 0x40,
+    Debug          = 0x80,
     ImplicitDefine = Implicit | Define,
     ImplicitKill   = Implicit | Kill
   };
@@ -62,7 +63,8 @@ public:
                                              flags & RegState::Dead,
                                              flags & RegState::Undef,
                                              flags & RegState::EarlyClobber,
-                                             SubReg));
+                                             SubReg,
+                                             flags & RegState::Debug));
     return *this;
   }
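The new `RegState::Debug` bit slots into the next free power of two, leaving the composite flags intact. A self-contained sketch (the lower flag values are assumed from the visible context, since the hunk only shows the tail of the enum):

```cpp
#include <cassert>

namespace RegState {
  enum {
    Define         = 0x2,   // assumed from surrounding LLVM context
    Implicit       = 0x4,
    Kill           = 0x8,
    Dead           = 0x10,
    Undef          = 0x20,
    EarlyClobber   = 0x40,
    Debug          = 0x80,  // the new bit: next unused power of two
    ImplicitDefine = Implicit | Define,
    ImplicitKill   = Implicit | Kill
  };
}
```

Because every flag is a distinct bit, `flags & RegState::Debug` in `addReg` cleanly extracts the new state without disturbing the older tests.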
 
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineModuleInfo.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineModuleInfo.h
index d365029..556ba7f 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineModuleInfo.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineModuleInfo.h
@@ -113,7 +113,14 @@ class MachineModuleInfo : public ImmutablePass {
   // LandingPads - List of LandingPadInfo describing the landing pad information
   // in the current function.
   std::vector<LandingPadInfo> LandingPads;
-  
+
+  // Map of invoke call site index values to associated begin EH_LABEL for
+  // the current function.
+  DenseMap<unsigned, unsigned> CallSiteMap;
+
+  // The current call site index being processed, if any. 0 if none.
+  unsigned CurCallSite;
+
   // TypeInfos - List of C++ TypeInfo used in the current function.
   //
   std::vector<GlobalVariable *> TypeInfos;
@@ -157,10 +164,6 @@ public:
   bool doInitialization();
   bool doFinalization();
 
-  /// BeginFunction - Begin gathering function meta information.
-  ///
-  void BeginFunction(MachineFunction *) {}
-  
   /// EndFunction - Discard function meta information.
   ///
   void EndFunction();
@@ -298,7 +301,26 @@ public:
   const std::vector<LandingPadInfo> &getLandingPads() const {
     return LandingPads;
   }
-  
+
+  /// setCallSiteBeginLabel - Map the begin label for a call site
+  void setCallSiteBeginLabel(unsigned BeginLabel, unsigned Site) {
+    CallSiteMap[BeginLabel] = Site;
+  }
+
+  /// getCallSiteBeginLabel - Get the call site number for a begin label
+  unsigned getCallSiteBeginLabel(unsigned BeginLabel) {
+    assert(CallSiteMap.count(BeginLabel) &&
+           "Missing call site number for EH_LABEL!");
+    return CallSiteMap[BeginLabel];
+  }
+
+  /// setCurrentCallSite - Set the call site currently being processed.
+  void setCurrentCallSite(unsigned Site) { CurCallSite = Site; }
+
+  /// getCurrentCallSite - Get the call site currently being processed, if any.
+  /// return zero if none.
+  unsigned getCurrentCallSite(void) { return CurCallSite; }
+
   /// getTypeInfos - Return a reference to the C++ typeinfo for the current
   /// function.
   const std::vector<GlobalVariable *> &getTypeInfos() const {
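The MachineModuleInfo additions keep a begin-label-to-call-site map plus a "current call site" cursor (0 meaning none). A minimal model of that bookkeeping, with `std::map` standing in for `llvm::DenseMap`:

```cpp
#include <cassert>
#include <map>

// Toy model of the new call-site bookkeeping above; names mirror the patch.
struct CallSiteInfo {
  std::map<unsigned, unsigned> CallSiteMap;  // begin EH_LABEL id -> site index
  unsigned CurCallSite = 0;                  // 0 means "no call site in flight"

  void setCallSiteBeginLabel(unsigned BeginLabel, unsigned Site) {
    CallSiteMap[BeginLabel] = Site;
  }
  unsigned getCallSiteBeginLabel(unsigned BeginLabel) {
    assert(CallSiteMap.count(BeginLabel) &&
           "Missing call site number for EH_LABEL!");
    return CallSiteMap[BeginLabel];
  }
  void setCurrentCallSite(unsigned Site) { CurCallSite = Site; }
  unsigned getCurrentCallSite() const { return CurCallSite; }
};
```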
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineModuleInfoImpls.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineModuleInfoImpls.h
index 44813cb..6679990 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineModuleInfoImpls.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineModuleInfoImpls.h
@@ -25,39 +25,38 @@ namespace llvm {
   class MachineModuleInfoMachO : public MachineModuleInfoImpl {
     /// FnStubs - Darwin '$stub' stubs.  The key is something like "Lfoo$stub",
     /// the value is something like "_foo".
-    DenseMap<const MCSymbol*, const MCSymbol*> FnStubs;
+    DenseMap<MCSymbol*, MCSymbol*> FnStubs;
     
     /// GVStubs - Darwin '$non_lazy_ptr' stubs.  The key is something like
     /// "Lfoo$non_lazy_ptr", the value is something like "_foo".
-    DenseMap<const MCSymbol*, const MCSymbol*> GVStubs;
+    DenseMap<MCSymbol*, MCSymbol*> GVStubs;
     
     /// HiddenGVStubs - Darwin '$non_lazy_ptr' stubs.  The key is something like
     /// "Lfoo$non_lazy_ptr", the value is something like "_foo".  Unlike GVStubs
     /// these are for things with hidden visibility.
-    DenseMap<const MCSymbol*, const MCSymbol*> HiddenGVStubs;
+    DenseMap<MCSymbol*, MCSymbol*> HiddenGVStubs;
     
     virtual void Anchor();  // Out of line virtual method.
   public:
     MachineModuleInfoMachO(const MachineModuleInfo &) {}
     
-    const MCSymbol *&getFnStubEntry(const MCSymbol *Sym) {
+    MCSymbol *&getFnStubEntry(MCSymbol *Sym) {
       assert(Sym && "Key cannot be null");
       return FnStubs[Sym];
     }
 
-    const MCSymbol *&getGVStubEntry(const MCSymbol *Sym) {
+    MCSymbol *&getGVStubEntry(MCSymbol *Sym) {
       assert(Sym && "Key cannot be null");
       return GVStubs[Sym];
     }
 
-    const MCSymbol *&getHiddenGVStubEntry(const MCSymbol *Sym) {
+    MCSymbol *&getHiddenGVStubEntry(MCSymbol *Sym) {
       assert(Sym && "Key cannot be null");
       return HiddenGVStubs[Sym];
     }
     
     /// Accessor methods to return the set of stubs in sorted order.
-    typedef std::vector<std::pair<const MCSymbol*, const MCSymbol*> >
-      SymbolListTy;
+    typedef std::vector<std::pair<MCSymbol*, MCSymbol*> > SymbolListTy;
     
     SymbolListTy GetFnStubList() const {
       return GetSortedStubs(FnStubs);
@@ -71,7 +70,7 @@ namespace llvm {
     
   private:
     static SymbolListTy
-    GetSortedStubs(const DenseMap<const MCSymbol*, const MCSymbol*> &Map);
+    GetSortedStubs(const DenseMap<MCSymbol*, MCSymbol*> &Map);
   };
   
 } // end namespace llvm
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineOperand.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineOperand.h
index 07d886d..dac0092 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineOperand.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineOperand.h
@@ -87,6 +87,10 @@ private:
   /// model the GCC inline asm '&' constraint modifier.
   bool IsEarlyClobber : 1;
 
+  /// IsDebug - True if this MO_Register 'use' operand is in a debug pseudo,
+  /// not a real instruction.  Such uses should be ignored during codegen.
+  bool IsDebug : 1;
+
   /// ParentMI - This is the instruction that this operand is embedded into. 
   /// This is valid for all operand types, when the operand is in an instr.
   MachineInstr *ParentMI;
@@ -214,6 +218,11 @@ public:
     return IsEarlyClobber;
   }
 
+  bool isDebug() const {
+    assert(isReg() && "Wrong MachineOperand accessor");
+    return IsDebug;
+  }
+
   /// getNextOperandForReg - Return the next MachineOperand in the function that
   /// uses or defines this register.
   MachineOperand *getNextOperandForReg() const {
@@ -236,11 +245,13 @@ public:
   
   void setIsUse(bool Val = true) {
     assert(isReg() && "Wrong MachineOperand accessor");
+    assert((Val || !isDebug()) && "Marking a debug operation as def");
     IsDef = !Val;
   }
   
   void setIsDef(bool Val = true) {
     assert(isReg() && "Wrong MachineOperand accessor");
+    assert((!Val || !isDebug()) && "Marking a debug operation as def");
     IsDef = Val;
   }
 
@@ -251,6 +262,7 @@ public:
 
   void setIsKill(bool Val = true) {
     assert(isReg() && !IsDef && "Wrong MachineOperand accessor");
+    assert((!Val || !isDebug()) && "Marking a debug operation as kill");
     IsKill = Val;
   }
   
@@ -366,7 +378,7 @@ public:
   /// the setReg method should be used.
   void ChangeToRegister(unsigned Reg, bool isDef, bool isImp = false,
                         bool isKill = false, bool isDead = false,
-                        bool isUndef = false);
+                        bool isUndef = false, bool isDebug = false);
   
   //===--------------------------------------------------------------------===//
   // Construction methods.
@@ -388,7 +400,8 @@ public:
                                   bool isKill = false, bool isDead = false,
                                   bool isUndef = false,
                                   bool isEarlyClobber = false,
-                                  unsigned SubReg = 0) {
+                                  unsigned SubReg = 0,
+                                  bool isDebug = false) {
     MachineOperand Op(MachineOperand::MO_Register);
     Op.IsDef = isDef;
     Op.IsImp = isImp;
@@ -396,6 +409,7 @@ public:
     Op.IsDead = isDead;
     Op.IsUndef = isUndef;
     Op.IsEarlyClobber = isEarlyClobber;
+    Op.IsDebug = isDebug;
     Op.Contents.Reg.RegNo = Reg;
     Op.Contents.Reg.Prev = 0;
     Op.Contents.Reg.Next = 0;
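The `IsDebug` operand bit comes with assertion guards: a debug-only use may never be flipped into a def or a kill. A compilable sketch of just that invariant (field set reduced for illustration):

```cpp
#include <cassert>

// Reduced model of MachineOperand's register flags and the new guards.
struct RegOperand {
  bool IsDef : 1;
  bool IsKill : 1;
  bool IsDebug : 1;  // use appears only in a debug pseudo-instruction
  RegOperand() : IsDef(false), IsKill(false), IsDebug(false) {}

  void setIsDef(bool Val = true) {
    assert((!Val || !IsDebug) && "Marking a debug operation as def");
    IsDef = Val;
  }
  void setIsKill(bool Val = true) {
    assert((!Val || !IsDebug) && "Marking a debug operation as kill");
    IsKill = Val;
  }
};
```

The guards make the codegen invariant explicit: debug operands are observers, so any pass that tries to promote one to a def or kill trips the assertion immediately instead of corrupting liveness.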
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineRegisterInfo.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineRegisterInfo.h
index c55cb32..01dc018 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineRegisterInfo.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineRegisterInfo.h
@@ -78,12 +78,12 @@ public:
   /// reg_begin/reg_end - Provide iteration support to walk over all definitions
   /// and uses of a register within the MachineFunction that corresponds to this
   /// MachineRegisterInfo object.
-  template<bool Uses, bool Defs>
+  template<bool Uses, bool Defs, bool SkipDebug>
   class defusechain_iterator;
 
   /// reg_iterator/reg_begin/reg_end - Walk all defs and uses of the specified
   /// register.
-  typedef defusechain_iterator<true,true> reg_iterator;
+  typedef defusechain_iterator<true,true,false> reg_iterator;
   reg_iterator reg_begin(unsigned RegNo) const {
     return reg_iterator(getRegUseDefListHead(RegNo));
   }
@@ -94,7 +94,7 @@ public:
   bool reg_empty(unsigned RegNo) const { return reg_begin(RegNo) == reg_end(); }
 
   /// def_iterator/def_begin/def_end - Walk all defs of the specified register.
-  typedef defusechain_iterator<false,true> def_iterator;
+  typedef defusechain_iterator<false,true,false> def_iterator;
   def_iterator def_begin(unsigned RegNo) const {
     return def_iterator(getRegUseDefListHead(RegNo));
   }
@@ -105,7 +105,7 @@ public:
   bool def_empty(unsigned RegNo) const { return def_begin(RegNo) == def_end(); }
 
   /// use_iterator/use_begin/use_end - Walk all uses of the specified register.
-  typedef defusechain_iterator<true,false> use_iterator;
+  typedef defusechain_iterator<true,false,false> use_iterator;
   use_iterator use_begin(unsigned RegNo) const {
     return use_iterator(getRegUseDefListHead(RegNo));
   }
@@ -115,7 +115,20 @@ public:
   /// register.
   bool use_empty(unsigned RegNo) const { return use_begin(RegNo) == use_end(); }
 
+  /// use_nodbg_iterator/use_nodbg_begin/use_nodbg_end - Walk all uses of the
+  /// specified register, skipping those marked as Debug.
+  typedef defusechain_iterator<true,false,true> use_nodbg_iterator;
+  use_nodbg_iterator use_nodbg_begin(unsigned RegNo) const {
+    return use_nodbg_iterator(getRegUseDefListHead(RegNo));
+  }
+  static use_nodbg_iterator use_nodbg_end() { return use_nodbg_iterator(0); }
   
+  /// use_nodbg_empty - Return true if there are no non-Debug instructions
+  /// using the specified register.
+  bool use_nodbg_empty(unsigned RegNo) const {
+    return use_nodbg_begin(RegNo) == use_nodbg_end();
+  }
+
   /// replaceRegWith - Replace all instances of FromReg with ToReg in the
   /// machine function.  This is like llvm-level X->replaceAllUsesWith(Y),
   /// except that it also changes any definitions of the register as well.
@@ -258,8 +271,9 @@ public:
   /// operands in the function that use or define a specific register.  If
   /// ReturnUses is true it returns uses of registers, if ReturnDefs is true it
   /// returns defs.  If neither are true then you are silly and it always
-  /// returns end().
-  template<bool ReturnUses, bool ReturnDefs>
+  /// returns end().  If SkipDebug is true it skips uses marked Debug
+  /// when incrementing.
+  template<bool ReturnUses, bool ReturnDefs, bool SkipDebug>
   class defusechain_iterator
     : public std::iterator<std::forward_iterator_tag, MachineInstr, ptrdiff_t> {
     MachineOperand *Op;
@@ -268,7 +282,8 @@ public:
       // we are interested in.
       if (op) {
         if ((!ReturnUses && op->isUse()) ||
-            (!ReturnDefs && op->isDef()))
+            (!ReturnDefs && op->isDef()) ||
+            (SkipDebug && op->isDebug()))
           ++*this;
       }
     }
@@ -299,7 +314,8 @@ public:
       
       // If this is an operand we don't care about, skip it.
       while (Op && ((!ReturnUses && Op->isUse()) || 
-                    (!ReturnDefs && Op->isDef())))
+                    (!ReturnDefs && Op->isDef()) ||
+                    (SkipDebug && Op->isDebug())))
         Op = Op->getNextOperandForReg();
       
       return *this;
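The `SkipDebug` template parameter threads a compile-time skip rule through the use/def chain walk. A standalone sketch of the same skip logic over a simple linked list (types are illustrative, not LLVM's):

```cpp
#include <cassert>

// Toy operand node: a chain of uses, some flagged as debug-only.
struct Op {
  bool IsDebug;
  Op *Next;
};

// Same rule as the iterator's advance loop above: when SkipDebug is true,
// step over every operand whose IsDebug flag is set.
template <bool SkipDebug>
Op *firstInteresting(Op *P) {
  while (P && SkipDebug && P->IsDebug)
    P = P->Next;
  return P;
}
```

With `SkipDebug = false` the walk is unchanged, so `reg_iterator`, `def_iterator`, and `use_iterator` keep their old behavior; only the new `use_nodbg_iterator` pays for the extra test.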
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/MachineRelocation.h b/libclamav/c++/llvm/include/llvm/CodeGen/MachineRelocation.h
index 1c15fab..c316785 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/MachineRelocation.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/MachineRelocation.h
@@ -138,14 +138,15 @@ public:
   ///
   static MachineRelocation getExtSym(uintptr_t offset, unsigned RelocationType, 
                                      const char *ES, intptr_t cst = 0,
-                                     bool GOTrelative = 0) {
+                                     bool GOTrelative = 0,
+                                     bool NeedStub = true) {
     assert((RelocationType & ~63) == 0 && "Relocation type too large!");
     MachineRelocation Result;
     Result.Offset = offset;
     Result.ConstantVal = cst;
     Result.TargetReloType = RelocationType;
     Result.AddrType = isExtSym;
-    Result.MayNeedFarStub = true;
+    Result.MayNeedFarStub = NeedStub;
     Result.GOTRelative = GOTrelative;
     Result.TargetResolve = false;
     Result.Target.ExtSym = ES;
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/ObjectCodeEmitter.h b/libclamav/c++/llvm/include/llvm/CodeGen/ObjectCodeEmitter.h
index 8252e07..170c0c8 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/ObjectCodeEmitter.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/ObjectCodeEmitter.h
@@ -81,7 +81,7 @@ public:
   /// written to the data stream in big-endian format.
   void emitDWordBE(uint64_t W);
 
-  /// emitAlignment - Move the CurBufferPtr pointer up the the specified
+  /// emitAlignment - Move the CurBufferPtr pointer up to the specified
   /// alignment (saturated to BufferEnd of course).
   void emitAlignment(unsigned Alignment = 0, uint8_t fill = 0);
 
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/Passes.h b/libclamav/c++/llvm/include/llvm/CodeGen/Passes.h
index 7e0da3f..dbc73cb 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/Passes.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/Passes.h
@@ -174,6 +174,10 @@ namespace llvm {
   /// optimization by increasing uses of extended values.
   FunctionPass *createOptimizeExtsPass();
 
+  /// createOptimizePHIsPass - This pass optimizes machine instruction PHIs
+  /// to take advantage of opportunities created during DAG legalization.
+  FunctionPass *createOptimizePHIsPass();
+
   /// createStackSlotColoringPass - This pass performs stack slot coloring.
   FunctionPass *createStackSlotColoringPass(bool);
 
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAG.h b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAG.h
index 33ebd00..60014f8 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAG.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAG.h
@@ -841,7 +841,7 @@ public:
   }
 
   /// AssignOrdering - Assign an order to the SDNode.
-  void AssignOrdering(SDNode *SD, unsigned Order);
+  void AssignOrdering(const SDNode *SD, unsigned Order);
 
   /// GetOrdering - Get the order for the SDNode.
   unsigned GetOrdering(const SDNode *SD) const;
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGNodes.h b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGNodes.h
index 45a9d40..6ba2d3b 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGNodes.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/SelectionDAGNodes.h
@@ -609,7 +609,7 @@ namespace ISD {
   /// which do not reference a specific memory location should be less than
   /// this value. Those that do must not be less than this value, and can
   /// be used with SelectionDAG::getMemIntrinsicNode.
-  static const int FIRST_TARGET_MEMORY_OPCODE = 1 << 14;
+  static const int FIRST_TARGET_MEMORY_OPCODE = BUILTIN_OP_END+80;
 
   /// Node predicates
 
@@ -821,6 +821,8 @@ public:
   /// set the SDNode
   void setNode(SDNode *N) { Node = N; }
 
+  inline SDNode *operator->() const { return Node; }
+  
   bool operator==(const SDValue &O) const {
     return Node == O.Node && ResNo == O.ResNo;
   }
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/SlotIndexes.h b/libclamav/c++/llvm/include/llvm/CodeGen/SlotIndexes.h
index 163642a..dd4caba 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/SlotIndexes.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/SlotIndexes.h
@@ -72,10 +72,13 @@ namespace llvm {
       }
     }
 
+    bool isValid() const {
+      return (index != EMPTY_KEY_INDEX && index != TOMBSTONE_KEY_INDEX);
+    }
+
     MachineInstr* getInstr() const { return mi; }
     void setInstr(MachineInstr *mi) {
-      assert(index != EMPTY_KEY_INDEX && index != TOMBSTONE_KEY_INDEX &&
-             "Attempt to modify reserved index.");
+      assert(isValid() && "Attempt to modify reserved index.");
       this->mi = mi;
     }
 
@@ -83,25 +86,21 @@ namespace llvm {
     void setIndex(unsigned index) {
       assert(index != EMPTY_KEY_INDEX && index != TOMBSTONE_KEY_INDEX &&
              "Attempt to set index to invalid value.");
-      assert(this->index != EMPTY_KEY_INDEX &&
-             this->index != TOMBSTONE_KEY_INDEX &&
-             "Attempt to reset reserved index value.");
+      assert(isValid() && "Attempt to reset reserved index value.");
       this->index = index;
     }
     
     IndexListEntry* getNext() { return next; }
     const IndexListEntry* getNext() const { return next; }
     void setNext(IndexListEntry *next) {
-      assert(index != EMPTY_KEY_INDEX && index != TOMBSTONE_KEY_INDEX &&
-             "Attempt to modify reserved index.");
+      assert(isValid() && "Attempt to modify reserved index.");
       this->next = next;
     }
 
     IndexListEntry* getPrev() { return prev; }
     const IndexListEntry* getPrev() const { return prev; }
     void setPrev(IndexListEntry *prev) {
-      assert(index != EMPTY_KEY_INDEX && index != TOMBSTONE_KEY_INDEX &&
-             "Attempt to modify reserved index.");
+      assert(isValid() && "Attempt to modify reserved index.");
       this->prev = prev;
     }
 
@@ -192,7 +191,8 @@ namespace llvm {
     /// Returns true if this is a valid index. Invalid indicies do
     /// not point into an index table, and cannot be compared.
     bool isValid() const {
-      return (lie.getPointer() != 0) && (lie.getPointer()->getIndex() != 0);
+      IndexListEntry *entry = lie.getPointer();
+      return ((entry!= 0) && (entry->isValid()));
     }
 
     /// Print this index to the given raw_ostream.
diff --git a/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.h b/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.h
index 0125190..a7aafc0 100644
--- a/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.h
+++ b/libclamav/c++/llvm/include/llvm/CodeGen/ValueTypes.h
@@ -492,26 +492,31 @@ namespace llvm {
 
     /// bitsEq - Return true if this has the same number of bits as VT.
     bool bitsEq(EVT VT) const {
+      if (EVT::operator==(VT)) return true;
       return getSizeInBits() == VT.getSizeInBits();
     }
 
     /// bitsGT - Return true if this has more bits than VT.
     bool bitsGT(EVT VT) const {
+      if (EVT::operator==(VT)) return false;
       return getSizeInBits() > VT.getSizeInBits();
     }
 
     /// bitsGE - Return true if this has no less bits than VT.
     bool bitsGE(EVT VT) const {
+      if (EVT::operator==(VT)) return true;
       return getSizeInBits() >= VT.getSizeInBits();
     }
 
     /// bitsLT - Return true if this has less bits than VT.
     bool bitsLT(EVT VT) const {
+      if (EVT::operator==(VT)) return false;
       return getSizeInBits() < VT.getSizeInBits();
     }
 
     /// bitsLE - Return true if this has no more bits than VT.
     bool bitsLE(EVT VT) const {
+      if (EVT::operator==(VT)) return true;
       return getSizeInBits() <= VT.getSizeInBits();
     }
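Each `bits*` comparison above gains an identity fast path: when the two type handles compare equal, the answer is known without computing bit widths. A hypothetical stand-in where the size query is modeled as a plain field (real `EVT` equality compares the type handle, and `getSizeInBits` can be costly for extended types, which is what the fast path avoids):

```cpp
#include <cassert>

struct VT {
  unsigned Size;
  bool operator==(const VT &O) const { return Size == O.Size; }
  unsigned getSizeInBits() const { return Size; }

  bool bitsEq(VT O) const {
    if (*this == O) return true;   // identical types: trivially equal widths
    return getSizeInBits() == O.getSizeInBits();
  }
  bool bitsGT(VT O) const {
    if (*this == O) return false;  // identical types: never strictly greater
    return getSizeInBits() > O.getSizeInBits();
  }
};
```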
 
diff --git a/libclamav/c++/llvm/include/llvm/Constant.h b/libclamav/c++/llvm/include/llvm/Constant.h
index 8072fd9..8647299 100644
--- a/libclamav/c++/llvm/include/llvm/Constant.h
+++ b/libclamav/c++/llvm/include/llvm/Constant.h
@@ -104,8 +104,7 @@ public:
   /// type, returns the elements of the vector in the specified smallvector.
   /// This handles breaking down a vector undef into undef elements, etc.  For
   /// constant exprs and other cases we can't handle, we return an empty vector.
-  void getVectorElements(LLVMContext &Context, 
-                         SmallVectorImpl<Constant*> &Elts) const;
+  void getVectorElements(SmallVectorImpl<Constant*> &Elts) const;
 
   /// destroyConstant - Called if some element of this constant is no longer
   /// valid.  At this point only other constants may be on the use_list for this
diff --git a/libclamav/c++/llvm/include/llvm/Constants.h b/libclamav/c++/llvm/include/llvm/Constants.h
index f34f9cb..bd14303 100644
--- a/libclamav/c++/llvm/include/llvm/Constants.h
+++ b/libclamav/c++/llvm/include/llvm/Constants.h
@@ -33,6 +33,7 @@ namespace llvm {
 class ArrayType;
 class IntegerType;
 class StructType;
+class UnionType;
 class PointerType;
 class VectorType;
 
@@ -453,6 +454,50 @@ struct OperandTraits<ConstantStruct> : public VariadicOperandTraits<> {
 DEFINE_TRANSPARENT_CASTED_OPERAND_ACCESSORS(ConstantStruct, Constant)
 
 //===----------------------------------------------------------------------===//
+// ConstantUnion - Constant Union Declarations
+//
+class ConstantUnion : public Constant {
+  friend struct ConstantCreator<ConstantUnion, UnionType, Constant*>;
+  ConstantUnion(const ConstantUnion &);      // DO NOT IMPLEMENT
+protected:
+  ConstantUnion(const UnionType *T, Constant* Val);
+public:
+  // ConstantUnion accessors
+  static Constant *get(const UnionType *T, Constant* V);
+
+  /// Transparently provide more efficient getOperand methods.
+  DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Constant);
+  
+  /// getType() specialization - Reduce amount of casting...
+  ///
+  inline const UnionType *getType() const {
+    return reinterpret_cast<const UnionType*>(Value::getType());
+  }
+
+  /// isNullValue - Return true if this is the value that would be returned by
+  /// getNullValue.  This always returns false because zero structs are always
+  /// created as ConstantAggregateZero objects.
+  virtual bool isNullValue() const {
+    return false;
+  }
+
+  virtual void destroyConstant();
+  virtual void replaceUsesOfWithOnConstant(Value *From, Value *To, Use *U);
+
+  /// Methods for support type inquiry through isa, cast, and dyn_cast:
+  static inline bool classof(const ConstantUnion *) { return true; }
+  static bool classof(const Value *V) {
+    return V->getValueID() == ConstantUnionVal;
+  }
+};
+
+template <>
+struct OperandTraits<ConstantUnion> : public FixedNumOperandTraits<1> {
+};
+
+DEFINE_TRANSPARENT_CASTED_OPERAND_ACCESSORS(ConstantUnion, Constant)
+
+//===----------------------------------------------------------------------===//
 /// ConstantVector - Constant Vector Declarations
 ///
 class ConstantVector : public Constant {
@@ -644,8 +689,7 @@ public:
   ///
 
   /// getAlignOf constant expr - computes the alignment of a type in a target
-  /// independent way (Note: the return type is an i32; Note: assumes that i8
-  /// is byte aligned).
+  /// independent way (Note: the return type is an i64).
   static Constant *getAlignOf(const Type* Ty);
   
   /// getSizeOf constant expr - computes the size of a type in a target
@@ -653,10 +697,15 @@ public:
   ///
   static Constant *getSizeOf(const Type* Ty);
 
-  /// getOffsetOf constant expr - computes the offset of a field in a target
-  /// independent way (Note: the return type is an i64).
+  /// getOffsetOf constant expr - computes the offset of a struct field in a 
+  /// target independent way (Note: the return type is an i64).
+  ///
+  static Constant *getOffsetOf(const StructType* STy, unsigned FieldNo);
+
+  /// getOffsetOf constant expr - This is a generalized form of getOffsetOf,
+  /// which supports any aggregate type, and any Constant index.
   ///
-  static Constant *getOffsetOf(const StructType* Ty, unsigned FieldNo);
+  static Constant *getOffsetOf(const Type* Ty, Constant *FieldNo);
   
   static Constant *getNeg(Constant *C);
   static Constant *getFNeg(Constant *C);
@@ -693,9 +742,13 @@ public:
   static Constant *getBitCast (Constant *C, const Type *Ty);
 
   static Constant *getNSWNeg(Constant *C);
+  static Constant *getNUWNeg(Constant *C);
   static Constant *getNSWAdd(Constant *C1, Constant *C2);
+  static Constant *getNUWAdd(Constant *C1, Constant *C2);
   static Constant *getNSWSub(Constant *C1, Constant *C2);
+  static Constant *getNUWSub(Constant *C1, Constant *C2);
   static Constant *getNSWMul(Constant *C1, Constant *C2);
+  static Constant *getNUWMul(Constant *C1, Constant *C2);
   static Constant *getExactSDiv(Constant *C1, Constant *C2);
 
   /// Transparently provide more efficient getOperand methods.
diff --git a/libclamav/c++/llvm/include/llvm/DerivedTypes.h b/libclamav/c++/llvm/include/llvm/DerivedTypes.h
index c220608..912bb6d 100644
--- a/libclamav/c++/llvm/include/llvm/DerivedTypes.h
+++ b/libclamav/c++/llvm/include/llvm/DerivedTypes.h
@@ -27,6 +27,7 @@ template<class ValType, class TypeClass> class TypeMap;
 class FunctionValType;
 class ArrayValType;
 class StructValType;
+class UnionValType;
 class PointerValType;
 class VectorValType;
 class IntegerValType;
@@ -229,7 +230,8 @@ public:
     return T->getTypeID() == ArrayTyID ||
            T->getTypeID() == StructTyID ||
            T->getTypeID() == PointerTyID ||
-           T->getTypeID() == VectorTyID;
+           T->getTypeID() == VectorTyID ||
+           T->getTypeID() == UnionTyID;
   }
 };
 
@@ -301,6 +303,63 @@ public:
 };
 
 
+/// UnionType - Class to represent union types. A union type is similar to
+/// a structure, except that all member fields begin at offset 0.
+///
+class UnionType : public CompositeType {
+  friend class TypeMap<UnionValType, UnionType>;
+  UnionType(const UnionType &);                   // Do not implement
+  const UnionType &operator=(const UnionType &);  // Do not implement
+  UnionType(LLVMContext &C, const Type* const* Types, unsigned NumTypes);
+public:
+  /// UnionType::get - This static method is the primary way to create a
+  /// UnionType.
+  static UnionType *get(const Type* const* Types, unsigned NumTypes);
+
+  /// UnionType::get - This static method is a convenience method for
+  /// creating union types by specifying the elements as arguments.
+  static UnionType *get(const Type *type, ...) END_WITH_NULL;
+
+  /// isValidElementType - Return true if the specified type is valid as a
+  /// element type.
+  static bool isValidElementType(const Type *ElemTy);
+  
+  /// Given an element type, return the member index of that type, or -1
+  /// if there is no such member type.
+  int getElementTypeIndex(const Type *ElemTy) const;
+
+  // Iterator access to the elements
+  typedef Type::subtype_iterator element_iterator;
+  element_iterator element_begin() const { return ContainedTys; }
+  element_iterator element_end() const { return &ContainedTys[NumContainedTys];}
+
+  // Random access to the elements
+  unsigned getNumElements() const { return NumContainedTys; }
+  const Type *getElementType(unsigned N) const {
+    assert(N < NumContainedTys && "Element number out of range!");
+    return ContainedTys[N];
+  }
+
+  /// getTypeAtIndex - Given an index value into the type, return the type of
+  /// the element.  For a union type, this must be a constant value...
+  ///
+  virtual const Type *getTypeAtIndex(const Value *V) const;
+  virtual const Type *getTypeAtIndex(unsigned Idx) const;
+  virtual bool indexValid(const Value *V) const;
+  virtual bool indexValid(unsigned Idx) const;
+
+  // Implement the AbstractTypeUser interface.
+  virtual void refineAbstractType(const DerivedType *OldTy, const Type *NewTy);
+  virtual void typeBecameConcrete(const DerivedType *AbsTy);
+
+  // Methods for support type inquiry through isa, cast, and dyn_cast:
+  static inline bool classof(const UnionType *) { return true; }
+  static inline bool classof(const Type *T) {
+    return T->getTypeID() == UnionTyID;
+  }
+};
+
+
 /// SequentialType - This is the superclass of the array, pointer and vector
 /// type classes.  All of these represent "arrays" in memory.  The array type
 /// represents a specifically sized array, pointer types are unsized/unknown
@@ -496,6 +555,7 @@ public:
 /// OpaqueType - Class to represent abstract types
 ///
 class OpaqueType : public DerivedType {
+  friend class LLVMContextImpl;
   OpaqueType(const OpaqueType &);                   // DO NOT IMPLEMENT
   const OpaqueType &operator=(const OpaqueType &);  // DO NOT IMPLEMENT
   OpaqueType(LLVMContext &C);
diff --git a/libclamav/c++/llvm/include/llvm/ExecutionEngine/ExecutionEngine.h b/libclamav/c++/llvm/include/llvm/ExecutionEngine/ExecutionEngine.h
index d2c547d..c3f1902 100644
--- a/libclamav/c++/llvm/include/llvm/ExecutionEngine/ExecutionEngine.h
+++ b/libclamav/c++/llvm/include/llvm/ExecutionEngine/ExecutionEngine.h
@@ -19,6 +19,7 @@
 #include <map>
 #include <string>
 #include "llvm/ADT/SmallVector.h"
+#include "llvm/ADT/StringRef.h"
 #include "llvm/ADT/ValueMap.h"
 #include "llvm/Support/ValueHandle.h"
 #include "llvm/System/Mutex.h"
@@ -36,7 +37,6 @@ class JITEventListener;
 class JITMemoryManager;
 class MachineCodeInfo;
 class Module;
-class ModuleProvider;
 class MutexGuard;
 class TargetData;
 class Type;
@@ -95,9 +95,9 @@ class ExecutionEngine {
   friend class EngineBuilder;  // To allow access to JITCtor and InterpCtor.
 
 protected:
-  /// Modules - This is a list of ModuleProvider's that we are JIT'ing from.  We
-  /// use a smallvector to optimize for the case where there is only one module.
-  SmallVector<ModuleProvider*, 1> Modules;
+  /// Modules - This is a list of Modules that we are JIT'ing from.  We use a
+  /// smallvector to optimize for the case where there is only one module.
+  SmallVector<Module*, 1> Modules;
   
   void setTargetData(const TargetData *td) {
     TD = td;
@@ -109,13 +109,17 @@ protected:
   // To avoid having libexecutionengine depend on the JIT and interpreter
   // libraries, the JIT and Interpreter set these functions to ctor pointers
   // at startup time if they are linked in.
-  static ExecutionEngine *(*JITCtor)(ModuleProvider *MP,
-                                     std::string *ErrorStr,
-                                     JITMemoryManager *JMM,
-                                     CodeGenOpt::Level OptLevel,
-                                     bool GVsWithCode,
-				     CodeModel::Model CMM);
-  static ExecutionEngine *(*InterpCtor)(ModuleProvider *MP,
+  static ExecutionEngine *(*JITCtor)(
+    Module *M,
+    std::string *ErrorStr,
+    JITMemoryManager *JMM,
+    CodeGenOpt::Level OptLevel,
+    bool GVsWithCode,
+    CodeModel::Model CMM,
+    StringRef MArch,
+    StringRef MCPU,
+    const SmallVectorImpl<std::string>& MAttrs);
+  static ExecutionEngine *(*InterpCtor)(Module *M,
                                         std::string *ErrorStr);
 
   /// LazyFunctionCreator - If an unknown function is needed, this function
@@ -141,8 +145,8 @@ public:
 
   /// create - This is the factory method for creating an execution engine which
   /// is appropriate for the current machine.  This takes ownership of the
-  /// module provider.
-  static ExecutionEngine *create(ModuleProvider *MP,
+  /// module.
+  static ExecutionEngine *create(Module *M,
                                  bool ForceInterpreter = false,
                                  std::string *ErrorStr = 0,
                                  CodeGenOpt::Level OptLevel =
@@ -158,18 +162,13 @@ public:
                                  // default freeMachineCodeForFunction works.
                                  bool GVsWithCode = true);
 
-  /// create - This is the factory method for creating an execution engine which
-  /// is appropriate for the current machine.  This takes ownership of the
-  /// module.
-  static ExecutionEngine *create(Module *M);
-
   /// createJIT - This is the factory method for creating a JIT for the current
   /// machine, it does not fall back to the interpreter.  This takes ownership
-  /// of the ModuleProvider and JITMemoryManager if successful.
+  /// of the Module and JITMemoryManager if successful.
   ///
   /// Clients should make sure to initialize targets prior to calling this
   /// function.
-  static ExecutionEngine *createJIT(ModuleProvider *MP,
+  static ExecutionEngine *createJIT(Module *M,
                                     std::string *ErrorStr = 0,
                                     JITMemoryManager *JMM = 0,
                                     CodeGenOpt::Level OptLevel =
@@ -178,11 +177,11 @@ public:
 				    CodeModel::Model CMM =
 				      CodeModel::Default);
 
-  /// addModuleProvider - Add a ModuleProvider to the list of modules that we
-  /// can JIT from.  Note that this takes ownership of the ModuleProvider: when
-  /// the ExecutionEngine is destroyed, it destroys the MP as well.
-  virtual void addModuleProvider(ModuleProvider *P) {
-    Modules.push_back(P);
+  /// addModule - Add a Module to the list of modules that we can JIT from.
+  /// Note that this takes ownership of the Module: when the ExecutionEngine is
+  /// destroyed, it destroys the Module as well.
+  virtual void addModule(Module *M) {
+    Modules.push_back(M);
   }
   
   //===----------------------------------------------------------------------===//
@@ -190,16 +189,9 @@ public:
   const TargetData *getTargetData() const { return TD; }
 
 
-  /// removeModuleProvider - Remove a ModuleProvider from the list of modules.
-  /// Relases the Module from the ModuleProvider, materializing it in the
-  /// process, and returns the materialized Module.
-  virtual Module* removeModuleProvider(ModuleProvider *P,
-                                       std::string *ErrInfo = 0);
-
-  /// deleteModuleProvider - Remove a ModuleProvider from the list of modules,
-  /// and deletes the ModuleProvider and owned Module.  Avoids materializing 
-  /// the underlying module.
-  virtual void deleteModuleProvider(ModuleProvider *P,std::string *ErrInfo = 0);
+  /// removeModule - Remove a Module from the list of modules.  Returns true if
+  /// M is found.
+  virtual bool removeModule(Module *M);
 
   /// FindFunctionNamed - Search all of the active modules to find the one that
   /// defines FnName.  This is very slow operation and shouldn't be used for
@@ -393,7 +385,7 @@ public:
   }
 
 protected:
-  explicit ExecutionEngine(ModuleProvider *P);
+  explicit ExecutionEngine(Module *M);
 
   void emitGlobals();
 
@@ -422,13 +414,16 @@ namespace EngineKind {
 class EngineBuilder {
 
  private:
-  ModuleProvider *MP;
+  Module *M;
   EngineKind::Kind WhichEngine;
   std::string *ErrorStr;
   CodeGenOpt::Level OptLevel;
   JITMemoryManager *JMM;
   bool AllocateGVsWithCode;
   CodeModel::Model CMModel;
+  std::string MArch;
+  std::string MCPU;
+  SmallVector<std::string, 4> MAttrs;
 
   /// InitEngine - Does the common initialization of default options.
   ///
@@ -443,16 +438,11 @@ class EngineBuilder {
 
  public:
   /// EngineBuilder - Constructor for EngineBuilder.  If create() is called and
-  /// is successful, the created engine takes ownership of the module
-  /// provider.
-  EngineBuilder(ModuleProvider *mp) : MP(mp) {
+  /// is successful, the created engine takes ownership of the module.
+  EngineBuilder(Module *m) : M(m) {
     InitEngine();
   }
 
-  /// EngineBuilder - Overloaded constructor that automatically creates an
-  /// ExistingModuleProvider for an existing module.
-  EngineBuilder(Module *m);
-
   /// setEngineKind - Controls whether the user wants the interpreter, the JIT,
   /// or whichever engine works.  This option defaults to EngineKind::Either.
   EngineBuilder &setEngineKind(EngineKind::Kind w) {
@@ -502,6 +492,26 @@ class EngineBuilder {
     return *this;
   }
 
+  /// setMArch - Override the architecture set by the Module's triple.
+  EngineBuilder &setMArch(StringRef march) {
+    MArch.assign(march.begin(), march.end());
+    return *this;
+  }
+
+  /// setMCPU - Target a specific cpu type.
+  EngineBuilder &setMCPU(StringRef mcpu) {
+    MCPU.assign(mcpu.begin(), mcpu.end());
+    return *this;
+  }
+
+  /// setMAttrs - Set cpu-specific attributes.
+  template<typename StringSequence>
+  EngineBuilder &setMAttrs(const StringSequence &mattrs) {
+    MAttrs.clear();
+    MAttrs.append(mattrs.begin(), mattrs.end());
+    return *this;
+  }
+
   ExecutionEngine *create();
 };
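The `EngineBuilder` hunk above drops the `ModuleProvider`-based constructor in favor of one taking a `Module` directly, and adds the chained `setMArch`/`setMCPU`/`setMAttrs` setters. As a hedged illustration of the fluent-builder pattern those setters follow, here is a self-contained sketch using simplified stand-in types (`Module`, `StringRef`), not the real LLVM classes:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Simplified stand-ins for llvm::Module and llvm::StringRef.
struct Module { std::string Name; };
typedef std::string StringRef;

// Mimics the chained-setter style of llvm::EngineBuilder in the diff.
class EngineBuilder {
  Module *M;
  std::string MArch, MCPU;
  std::vector<std::string> MAttrs;
public:
  explicit EngineBuilder(Module *m) : M(m) {}

  // Each setter returns *this so calls can be chained.
  EngineBuilder &setMArch(StringRef march) { MArch = march; return *this; }
  EngineBuilder &setMCPU(StringRef mcpu)   { MCPU  = mcpu;  return *this; }

  template <typename StringSequence>
  EngineBuilder &setMAttrs(const StringSequence &mattrs) {
    MAttrs.assign(mattrs.begin(), mattrs.end());
    return *this;
  }

  // Accessors for inspection (not part of the real API).
  const std::string &arch() const { return MArch; }
  const std::string &cpu()  const { return MCPU; }
  std::size_t numAttrs()    const { return MAttrs.size(); }
};
```

In the real API the chain would end with `create()` returning an `ExecutionEngine*`; the accessors here exist only so the sketch can be exercised.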
 
diff --git a/libclamav/c++/llvm/include/llvm/GVMaterializer.h b/libclamav/c++/llvm/include/llvm/GVMaterializer.h
new file mode 100644
index 0000000..c143552
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/GVMaterializer.h
@@ -0,0 +1,66 @@
+//===-- llvm/GVMaterializer.h - Interface for GV materializers --*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file provides an abstract interface for loading a module from some
+// place.  This interface allows incremental or random access loading of
+// functions from the file.  This is useful for applications like JIT compilers
+// or interprocedural optimizers that do not need the entire program in memory
+// at the same time.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef GVMATERIALIZER_H
+#define GVMATERIALIZER_H
+
+#include <string>
+
+namespace llvm {
+
+class Function;
+class GlobalValue;
+class Module;
+
+class GVMaterializer {
+protected:
+  GVMaterializer() {}
+
+public:
+  virtual ~GVMaterializer();
+
+  /// isMaterializable - True if GV can be materialized from whatever backing
+  /// store this GVMaterializer uses and has not been materialized yet.
+  virtual bool isMaterializable(const GlobalValue *GV) const = 0;
+
+  /// isDematerializable - True if GV has been materialized and can be
+  /// dematerialized back to whatever backing store this GVMaterializer uses.
+  virtual bool isDematerializable(const GlobalValue *GV) const = 0;
+
+  /// Materialize - make sure the given GlobalValue is fully read.  If the
+  /// module is corrupt, this returns true and fills in the optional string with
+  /// information about the problem.  If successful, this returns false.
+  ///
+  virtual bool Materialize(GlobalValue *GV, std::string *ErrInfo = 0) = 0;
+
+  /// Dematerialize - If the given GlobalValue is read in, and if the
+  /// GVMaterializer supports it, release the memory for the GV, and set it up
+  /// to be materialized lazily.  If the Materializer doesn't support this
+  /// capability, this method is a noop.
+  ///
+  virtual void Dematerialize(GlobalValue *) {}
+
+  /// MaterializeModule - make sure the entire Module has been completely read.
+  /// On error, this returns true and fills in the optional string with
+  /// information about the problem.  If successful, this returns false.
+  ///
+  virtual bool MaterializeModule(Module *M, std::string *ErrInfo = 0) = 0;
+};
+
+} // End llvm namespace
+
+#endif
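The new `GVMaterializer` interface above takes over the lazy-loading role previously played by `ModuleProvider`: a global value is "materializable" until it has been read in, and `Materialize` returns false on success. A minimal stand-alone sketch of how a subclass might track that state, using stand-in types rather than the real LLVM headers:

```cpp
#include <cassert>
#include <set>
#include <string>

// Stand-ins for llvm::GlobalValue / llvm::Module (not the real classes).
struct GlobalValue { int Id; };
struct Module {};

// Abstract interface shaped like the llvm::GVMaterializer declared above.
class GVMaterializer {
protected:
  GVMaterializer() {}
public:
  virtual ~GVMaterializer() {}
  virtual bool isMaterializable(const GlobalValue *GV) const = 0;
  virtual bool Materialize(GlobalValue *GV, std::string *ErrInfo = 0) = 0;
};

// Toy backing store: every GV starts "on disk"; Materialize reads it in-core.
class ToyMaterializer : public GVMaterializer {
  std::set<int> InCore;   // Ids already read in.
public:
  virtual bool isMaterializable(const GlobalValue *GV) const {
    return InCore.count(GV->Id) == 0;   // not yet materialized
  }
  virtual bool Materialize(GlobalValue *GV, std::string * = 0) {
    InCore.insert(GV->Id);
    return false;                        // false == success, as in the interface
  }
};
```

A real implementation (e.g. the BitcodeReader) would parse the function body here instead of just recording an Id.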
diff --git a/libclamav/c++/llvm/include/llvm/GlobalValue.h b/libclamav/c++/llvm/include/llvm/GlobalValue.h
index 9875a83..c15b555 100644
--- a/libclamav/c++/llvm/include/llvm/GlobalValue.h
+++ b/libclamav/c++/llvm/include/llvm/GlobalValue.h
@@ -43,7 +43,6 @@ public:
     DLLImportLinkage,   ///< Function to be imported from DLL
     DLLExportLinkage,   ///< Function to be accessible from DLL.
     ExternalWeakLinkage,///< ExternalWeak linkage description.
-    GhostLinkage,       ///< Stand-in functions for streaming fns from BC files.
     CommonLinkage       ///< Tentative definitions.
   };
 
@@ -93,7 +92,7 @@ public:
   void setSection(StringRef S) { Section = S; }
   
   /// If the usage is empty (except transitively dead constants), then this
-  /// global value can can be safely deleted since the destructor will
+  /// global value can be safely deleted since the destructor will
   /// delete the dead constants as well.
   /// @brief Determine if the usage of this global value is empty except
   /// for transitively dead constants.
@@ -132,7 +131,6 @@ public:
   bool hasDLLImportLinkage() const { return Linkage == DLLImportLinkage; }
   bool hasDLLExportLinkage() const { return Linkage == DLLExportLinkage; }
   bool hasExternalWeakLinkage() const { return Linkage == ExternalWeakLinkage; }
-  bool hasGhostLinkage() const { return Linkage == GhostLinkage; }
   bool hasCommonLinkage() const { return Linkage == CommonLinkage; }
 
   void setLinkage(LinkageTypes LT) { Linkage = LT; }
@@ -164,12 +162,33 @@ public:
   /// create a GlobalValue) from the GlobalValue Src to this one.
   virtual void copyAttributesFrom(const GlobalValue *Src);
 
-  /// hasNotBeenReadFromBitcode - If a module provider is being used to lazily
-  /// stream in functions from disk, this method can be used to check to see if
-  /// the function has been read in yet or not.  Unless you are working on the
-  /// JIT or something else that streams stuff in lazily, you don't need to
-  /// worry about this.
-  bool hasNotBeenReadFromBitcode() const { return Linkage == GhostLinkage; }
+/// @name Materialization
+/// Materialization is used to construct functions only as they're needed. This
+/// is useful to reduce memory usage in LLVM or parsing work done by the
+/// BitcodeReader to load the Module.
+/// @{
+
+  /// isMaterializable - If this function's Module is lazily streaming in
+  /// functions from disk or some other source, this method can be used to
+  /// check whether the function has been read in yet.
+  bool isMaterializable() const;
+
+  /// isDematerializable - Returns true if this function was loaded from a
+  /// GVMaterializer that's still attached to its Module and that knows how to
+  /// dematerialize the function.
+  bool isDematerializable() const;
+
+  /// Materialize - make sure this GlobalValue is fully read.  If the module is
+  /// corrupt, this returns true and fills in the optional string with
+  /// information about the problem.  If successful, this returns false.
+  bool Materialize(std::string *ErrInfo = 0);
+
+  /// Dematerialize - If this GlobalValue is read in, and if the GVMaterializer
+  /// supports it, release the memory for the function, and set it up to be
+  /// materialized lazily.  If !isDematerializable(), this method is a noop.
+  void Dematerialize();
+
+/// @}
 
   /// Override from Constant class. No GlobalValue's are null values so this
   /// always returns false.
diff --git a/libclamav/c++/llvm/include/llvm/InlineAsm.h b/libclamav/c++/llvm/include/llvm/InlineAsm.h
index 482e53e..4490ce5 100644
--- a/libclamav/c++/llvm/include/llvm/InlineAsm.h
+++ b/libclamav/c++/llvm/include/llvm/InlineAsm.h
@@ -39,7 +39,7 @@ class InlineAsm : public Value {
   virtual ~InlineAsm();
 public:
 
-  /// InlineAsm::get - Return the the specified uniqued inline asm string.
+  /// InlineAsm::get - Return the specified uniqued inline asm string.
   ///
   static InlineAsm *get(const FunctionType *Ty, StringRef AsmString,
                         StringRef Constraints, bool hasSideEffects,
diff --git a/libclamav/c++/llvm/include/llvm/InstrTypes.h b/libclamav/c++/llvm/include/llvm/InstrTypes.h
index b5cc659..49cdd6a 100644
--- a/libclamav/c++/llvm/include/llvm/InstrTypes.h
+++ b/libclamav/c++/llvm/include/llvm/InstrTypes.h
@@ -299,6 +299,27 @@ public:
     return BO;
   }
 
+  /// CreateNUWMul - Create a Mul operator with the NUW flag set.
+  ///
+  static BinaryOperator *CreateNUWMul(Value *V1, Value *V2,
+                                      const Twine &Name = "") {
+    BinaryOperator *BO = CreateMul(V1, V2, Name);
+    BO->setHasNoUnsignedWrap(true);
+    return BO;
+  }
+  static BinaryOperator *CreateNUWMul(Value *V1, Value *V2,
+                                      const Twine &Name, BasicBlock *BB) {
+    BinaryOperator *BO = CreateMul(V1, V2, Name, BB);
+    BO->setHasNoUnsignedWrap(true);
+    return BO;
+  }
+  static BinaryOperator *CreateNUWMul(Value *V1, Value *V2,
+                                      const Twine &Name, Instruction *I) {
+    BinaryOperator *BO = CreateMul(V1, V2, Name, I);
+    BO->setHasNoUnsignedWrap(true);
+    return BO;
+  }
+
   /// CreateExactSDiv - Create an SDiv operator with the exact flag set.
   ///
   static BinaryOperator *CreateExactSDiv(Value *V1, Value *V2,
@@ -334,6 +355,10 @@ public:
                                       Instruction *InsertBefore = 0);
   static BinaryOperator *CreateNSWNeg(Value *Op, const Twine &Name,
                                       BasicBlock *InsertAtEnd);
+  static BinaryOperator *CreateNUWNeg(Value *Op, const Twine &Name = "",
+                                      Instruction *InsertBefore = 0);
+  static BinaryOperator *CreateNUWNeg(Value *Op, const Twine &Name,
+                                      BasicBlock *InsertAtEnd);
   static BinaryOperator *CreateFNeg(Value *Op, const Twine &Name = "",
                                     Instruction *InsertBefore = 0);
   static BinaryOperator *CreateFNeg(Value *Op, const Twine &Name,
@@ -671,36 +696,36 @@ public:
   /// range 32-64 are reserved for ICmpInst. This is necessary to ensure the
   /// predicate values are not overlapping between the classes.
   enum Predicate {
-    // Opcode             U L G E    Intuitive operation
-    FCMP_FALSE =  0,  /// 0 0 0 0    Always false (always folded)
-    FCMP_OEQ   =  1,  /// 0 0 0 1    True if ordered and equal
-    FCMP_OGT   =  2,  /// 0 0 1 0    True if ordered and greater than
-    FCMP_OGE   =  3,  /// 0 0 1 1    True if ordered and greater than or equal
-    FCMP_OLT   =  4,  /// 0 1 0 0    True if ordered and less than
-    FCMP_OLE   =  5,  /// 0 1 0 1    True if ordered and less than or equal
-    FCMP_ONE   =  6,  /// 0 1 1 0    True if ordered and operands are unequal
-    FCMP_ORD   =  7,  /// 0 1 1 1    True if ordered (no nans)
-    FCMP_UNO   =  8,  /// 1 0 0 0    True if unordered: isnan(X) | isnan(Y)
-    FCMP_UEQ   =  9,  /// 1 0 0 1    True if unordered or equal
-    FCMP_UGT   = 10,  /// 1 0 1 0    True if unordered or greater than
-    FCMP_UGE   = 11,  /// 1 0 1 1    True if unordered, greater than, or equal
-    FCMP_ULT   = 12,  /// 1 1 0 0    True if unordered or less than
-    FCMP_ULE   = 13,  /// 1 1 0 1    True if unordered, less than, or equal
-    FCMP_UNE   = 14,  /// 1 1 1 0    True if unordered or not equal
-    FCMP_TRUE  = 15,  /// 1 1 1 1    Always true (always folded)
+    // Opcode              U L G E    Intuitive operation
+    FCMP_FALSE =  0,  ///< 0 0 0 0    Always false (always folded)
+    FCMP_OEQ   =  1,  ///< 0 0 0 1    True if ordered and equal
+    FCMP_OGT   =  2,  ///< 0 0 1 0    True if ordered and greater than
+    FCMP_OGE   =  3,  ///< 0 0 1 1    True if ordered and greater than or equal
+    FCMP_OLT   =  4,  ///< 0 1 0 0    True if ordered and less than
+    FCMP_OLE   =  5,  ///< 0 1 0 1    True if ordered and less than or equal
+    FCMP_ONE   =  6,  ///< 0 1 1 0    True if ordered and operands are unequal
+    FCMP_ORD   =  7,  ///< 0 1 1 1    True if ordered (no nans)
+    FCMP_UNO   =  8,  ///< 1 0 0 0    True if unordered: isnan(X) | isnan(Y)
+    FCMP_UEQ   =  9,  ///< 1 0 0 1    True if unordered or equal
+    FCMP_UGT   = 10,  ///< 1 0 1 0    True if unordered or greater than
+    FCMP_UGE   = 11,  ///< 1 0 1 1    True if unordered, greater than, or equal
+    FCMP_ULT   = 12,  ///< 1 1 0 0    True if unordered or less than
+    FCMP_ULE   = 13,  ///< 1 1 0 1    True if unordered, less than, or equal
+    FCMP_UNE   = 14,  ///< 1 1 1 0    True if unordered or not equal
+    FCMP_TRUE  = 15,  ///< 1 1 1 1    Always true (always folded)
     FIRST_FCMP_PREDICATE = FCMP_FALSE,
     LAST_FCMP_PREDICATE = FCMP_TRUE,
     BAD_FCMP_PREDICATE = FCMP_TRUE + 1,
-    ICMP_EQ    = 32,  /// equal
-    ICMP_NE    = 33,  /// not equal
-    ICMP_UGT   = 34,  /// unsigned greater than
-    ICMP_UGE   = 35,  /// unsigned greater or equal
-    ICMP_ULT   = 36,  /// unsigned less than
-    ICMP_ULE   = 37,  /// unsigned less or equal
-    ICMP_SGT   = 38,  /// signed greater than
-    ICMP_SGE   = 39,  /// signed greater or equal
-    ICMP_SLT   = 40,  /// signed less than
-    ICMP_SLE   = 41,  /// signed less or equal
+    ICMP_EQ    = 32,  ///< equal
+    ICMP_NE    = 33,  ///< not equal
+    ICMP_UGT   = 34,  ///< unsigned greater than
+    ICMP_UGE   = 35,  ///< unsigned greater or equal
+    ICMP_ULT   = 36,  ///< unsigned less than
+    ICMP_ULE   = 37,  ///< unsigned less or equal
+    ICMP_SGT   = 38,  ///< signed greater than
+    ICMP_SGE   = 39,  ///< signed greater or equal
+    ICMP_SLT   = 40,  ///< signed less than
+    ICMP_SLE   = 41,  ///< signed less or equal
     FIRST_ICMP_PREDICATE = ICMP_EQ,
     LAST_ICMP_PREDICATE = ICMP_SLE,
     BAD_ICMP_PREDICATE = ICMP_SLE + 1
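The `U L G E` columns in the doxygen comments above encode each FCMP predicate as four bits: unordered (bit 3), less-than (bit 2), greater-than (bit 1), equal (bit 0). A small self-contained check of that encoding, with the enum values copied from the diff:

```cpp
#include <cassert>

// FCMP predicate values as listed in the diff; the low four bits are
// E (bit 0), G (bit 1), L (bit 2), U (bit 3).
enum Predicate {
  FCMP_FALSE = 0,  FCMP_OEQ = 1,  FCMP_OGT = 2,  FCMP_OGE = 3,
  FCMP_OLT   = 4,  FCMP_OLE = 5,  FCMP_ONE = 6,  FCMP_ORD = 7,
  FCMP_UNO   = 8,  FCMP_UEQ = 9,  FCMP_UGT = 10, FCMP_UGE = 11,
  FCMP_ULT   = 12, FCMP_ULE = 13, FCMP_UNE = 14, FCMP_TRUE = 15
};

// Helper names are illustrative, not part of the LLVM API.
inline bool allowsUnordered(Predicate P) { return (P & 8) != 0; } // U bit
inline bool allowsLess(Predicate P)      { return (P & 4) != 0; } // L bit
inline bool allowsGreater(Predicate P)   { return (P & 2) != 0; } // G bit
inline bool allowsEqual(Predicate P)     { return (P & 1) != 0; } // E bit
```

For example, `FCMP_OGE` is 3 (`0 0 1 1`): greater-than or equal, ordered only; `FCMP_ULT` is 12 (`1 1 0 0`): unordered or less-than.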
diff --git a/libclamav/c++/llvm/include/llvm/Instruction.h b/libclamav/c++/llvm/include/llvm/Instruction.h
index d45da97..cf9dc44 100644
--- a/libclamav/c++/llvm/include/llvm/Instruction.h
+++ b/libclamav/c++/llvm/include/llvm/Instruction.h
@@ -148,7 +148,7 @@ public:
       getAllMetadataImpl(MDs);
   }
   
-  /// setMetadata - Set the metadata of of the specified kind to the specified
+  /// setMetadata - Set the metadata of the specified kind to the specified
   /// node.  This updates/replaces metadata if already present, or removes it if
   /// Node is null.
   void setMetadata(unsigned KindID, MDNode *Node);
diff --git a/libclamav/c++/llvm/include/llvm/Intrinsics.h b/libclamav/c++/llvm/include/llvm/Intrinsics.h
index 8f1b1ae..5cfe551 100644
--- a/libclamav/c++/llvm/include/llvm/Intrinsics.h
+++ b/libclamav/c++/llvm/include/llvm/Intrinsics.h
@@ -63,9 +63,9 @@ namespace Intrinsic {
   /// declaration for an intrinsic, and return it.
   ///
   /// The Tys and numTys parameters are for intrinsics with overloaded types
-  /// (e.g., those using iAny or fAny). For a declaration for an overloaded
-  /// intrinsic, Tys should point to an array of numTys pointers to Type,
-  /// and must provide exactly one type for each overloaded type in the
+  /// (e.g., those using iAny, fAny, vAny, or iPTRAny). For a declaration for an
+  /// overloaded intrinsic, Tys should point to an array of numTys pointers to
+  /// Type, and must provide exactly one type for each overloaded type in the
   /// intrinsic.
   Function *getDeclaration(Module *M, ID id, const Type **Tys = 0, 
                            unsigned numTys = 0);
diff --git a/libclamav/c++/llvm/include/llvm/Intrinsics.td b/libclamav/c++/llvm/include/llvm/Intrinsics.td
index 684f872..3a0da9c 100644
--- a/libclamav/c++/llvm/include/llvm/Intrinsics.td
+++ b/libclamav/c++/llvm/include/llvm/Intrinsics.td
@@ -309,6 +309,7 @@ let Properties = [IntrNoMem] in {
   def int_eh_sjlj_setjmp  : Intrinsic<[llvm_i32_ty],  [llvm_ptr_ty]>;
   def int_eh_sjlj_longjmp : Intrinsic<[llvm_void_ty], [llvm_ptr_ty]>;
   def int_eh_sjlj_lsda    : Intrinsic<[llvm_ptr_ty]>;
+  def int_eh_sjlj_callsite: Intrinsic<[llvm_void_ty], [llvm_i32_ty]>;
 }
 
 //===---------------- Generic Variable Attribute Intrinsics----------------===//
diff --git a/libclamav/c++/llvm/include/llvm/Linker.h b/libclamav/c++/llvm/include/llvm/Linker.h
index a68a2e0..cc7bf88 100644
--- a/libclamav/c++/llvm/include/llvm/Linker.h
+++ b/libclamav/c++/llvm/include/llvm/Linker.h
@@ -223,7 +223,7 @@ class Linker {
     /// the archive that resolve outstanding symbols will be linked in. The
     /// library is searched repeatedly until no more modules that resolve
     /// symbols can be found. If an error occurs, the error string is  set.
-    /// To speed up this function, ensure the the archive has been processed
+    /// To speed up this function, ensure the archive has been processed
     /// llvm-ranlib or the S option was given to llvm-ar when the archive was
     /// created. These tools add a symbol table to the archive which makes the
     /// search for undefined symbols much faster.
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCAsmInfo.h b/libclamav/c++/llvm/include/llvm/MC/MCAsmInfo.h
index c4f15a0..3effea4 100644
--- a/libclamav/c++/llvm/include/llvm/MC/MCAsmInfo.h
+++ b/libclamav/c++/llvm/include/llvm/MC/MCAsmInfo.h
@@ -47,17 +47,6 @@ namespace llvm {
     /// emitted in Static relocation model.
     bool HasStaticCtorDtorReferenceInStaticMode;  // Default is false.
     
-    /// NeedsSet - True if target asm treats expressions in data directives
-    /// as linktime-relocatable.  For assembly-time computation, we need to
-    /// use a .set.  Thus:
-    /// .set w, x-y
-    /// .long w
-    /// is computed at assembly time, while
-    /// .long x-y
-    /// is relocated if the relative locations of x and y change at linktime.
-    /// We want both these things in different places.
-    bool NeedsSet;                           // Defaults to false.
-    
     /// MaxInstLength - This is the maximum possible length of an instruction,
     /// which is needed to compute the size of an inline asm.
     unsigned MaxInstLength;                  // Defaults to 4.
@@ -73,7 +62,7 @@ namespace llvm {
 
     /// CommentColumn - This indicates the comment num (zero-based) at
     /// which asm comments should be printed.
-    unsigned CommentColumn;                  // Defaults to 60
+    unsigned CommentColumn;                  // Defaults to 40
 
     /// CommentString - This indicates the comment character used by the
     /// assembler.
@@ -184,13 +173,16 @@ namespace llvm {
     ///
     const char *ExternDirective;             // Defaults to NULL.
     
-    /// SetDirective - This is the name of a directive that can be used to tell
-    /// the assembler to set the value of a variable to some expression.
-    const char *SetDirective;                // Defaults to null.
+    /// HasSetDirective - True if the assembler supports the .set directive.
+    bool HasSetDirective;                    // Defaults to true.
     
     /// HasLCOMMDirective - This is true if the target supports the .lcomm
     /// directive.
-    bool HasLCOMMDirective;              // Defaults to false.
+    bool HasLCOMMDirective;                  // Defaults to false.
+    
+    /// COMMDirectiveAlignmentIsInBytes - True if the COMMDirective's optional
+    /// alignment is to be specified in bytes instead of log2(n).
+    bool COMMDirectiveAlignmentIsInBytes;    // Defaults to true;
     
     /// HasDotTypeDotSizeDirective - True if the target has .type and .size
     /// directives, this is true for most ELF targets.
@@ -321,9 +313,6 @@ namespace llvm {
     bool hasStaticCtorDtorReferenceInStaticMode() const {
       return HasStaticCtorDtorReferenceInStaticMode;
     }
-    bool needsSet() const {
-      return NeedsSet;
-    }
     unsigned getMaxInstLength() const {
       return MaxInstLength;
     }
@@ -387,11 +376,12 @@ namespace llvm {
     const char *getExternDirective() const {
       return ExternDirective;
     }
-    const char *getSetDirective() const {
-      return SetDirective;
-    }
+    bool hasSetDirective() const { return HasSetDirective; }
     bool hasLCOMMDirective() const { return HasLCOMMDirective; }
     bool hasDotTypeDotSizeDirective() const {return HasDotTypeDotSizeDirective;}
+    bool getCOMMDirectiveAlignmentIsInBytes() const {
+      return COMMDirectiveAlignmentIsInBytes;
+    }
     bool hasSingleParameterDotFile() const { return HasSingleParameterDotFile; }
     bool hasNoDeadStrip() const { return HasNoDeadStrip; }
     const char *getWeakRefDirective() const { return WeakRefDirective; }
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCAssembler.h b/libclamav/c++/llvm/include/llvm/MC/MCAssembler.h
index be017bf..4527f3c 100644
--- a/libclamav/c++/llvm/include/llvm/MC/MCAssembler.h
+++ b/libclamav/c++/llvm/include/llvm/MC/MCAssembler.h
@@ -14,6 +14,7 @@
 #include "llvm/ADT/ilist.h"
 #include "llvm/ADT/ilist_node.h"
 #include "llvm/Support/Casting.h"
+#include "llvm/MC/MCFixup.h"
 #include "llvm/System/DataTypes.h"
 #include <vector> // FIXME: Shouldn't be needed.
 
@@ -22,10 +23,34 @@ class raw_ostream;
 class MCAssembler;
 class MCContext;
 class MCExpr;
+class MCFragment;
 class MCSection;
 class MCSectionData;
 class MCSymbol;
 
+/// MCAsmFixup - Represent a fixed size region of bytes inside some fragment
+/// which needs to be rewritten. This region will either be rewritten by the
+/// assembler or cause a relocation entry to be generated.
+struct MCAsmFixup {
+  /// Offset - The offset inside the fragment which needs to be rewritten.
+  uint64_t Offset;
+
+  /// Value - The expression to eventually write into the fragment.
+  const MCExpr *Value;
+
+  /// Kind - The fixup kind.
+  MCFixupKind Kind;
+
+  /// FixedValue - The value to replace the fixup with.
+  //
+  // FIXME: This should not be here.
+  uint64_t FixedValue;
+
+public:
+  MCAsmFixup(uint64_t _Offset, const MCExpr &_Value, MCFixupKind _Kind)
+    : Offset(_Offset), Value(&_Value), Kind(_Kind), FixedValue(0) {}
+};
+
 class MCFragment : public ilist_node<MCFragment> {
   MCFragment(const MCFragment&);     // DO NOT IMPLEMENT
   void operator=(const MCFragment&); // DO NOT IMPLEMENT
@@ -85,7 +110,7 @@ public:
 
   uint64_t getAddress() const;
 
-  uint64_t getFileSize() const { 
+  uint64_t getFileSize() const {
     assert(FileSize != ~UINT64_C(0) && "File size not set!");
     return FileSize;
   }
@@ -103,11 +128,20 @@ public:
   /// @}
 
   static bool classof(const MCFragment *O) { return true; }
+
+  virtual void dump();
 };
 
 class MCDataFragment : public MCFragment {
   SmallString<32> Contents;
 
+  /// Fixups - The list of fixups in this fragment.
+  std::vector<MCAsmFixup> Fixups;
+
+public:
+  typedef std::vector<MCAsmFixup>::const_iterator const_fixup_iterator;
+  typedef std::vector<MCAsmFixup>::iterator fixup_iterator;
+
 public:
   MCDataFragment(MCSectionData *SD = 0) : MCFragment(FT_Data, SD) {}
 
@@ -123,10 +157,28 @@ public:
 
   /// @}
 
-  static bool classof(const MCFragment *F) { 
-    return F->getKind() == MCFragment::FT_Data; 
+  /// @name Fixup Access
+  /// @{
+
+  std::vector<MCAsmFixup> &getFixups() { return Fixups; }
+  const std::vector<MCAsmFixup> &getFixups() const { return Fixups; }
+
+  fixup_iterator fixup_begin() { return Fixups.begin(); }
+  const_fixup_iterator fixup_begin() const { return Fixups.begin(); }
+
+  fixup_iterator fixup_end() {return Fixups.end();}
+  const_fixup_iterator fixup_end() const {return Fixups.end();}
+
+  size_t fixup_size() const { return Fixups.size(); }
+
+  /// @}
+
+  static bool classof(const MCFragment *F) {
+    return F->getKind() == MCFragment::FT_Data;
   }
   static bool classof(const MCDataFragment *) { return true; }
+
+  virtual void dump();
 };
 
 class MCAlignFragment : public MCFragment {
@@ -158,7 +210,7 @@ public:
   }
 
   unsigned getAlignment() const { return Alignment; }
-  
+
   int64_t getValue() const { return Value; }
 
   unsigned getValueSize() const { return ValueSize; }
@@ -167,15 +219,17 @@ public:
 
   /// @}
 
-  static bool classof(const MCFragment *F) { 
-    return F->getKind() == MCFragment::FT_Align; 
+  static bool classof(const MCFragment *F) {
+    return F->getKind() == MCFragment::FT_Align;
   }
   static bool classof(const MCAlignFragment *) { return true; }
+
+  virtual void dump();
 };
 
 class MCFillFragment : public MCFragment {
   /// Value - Value to use for filling bytes.
-  const MCExpr *Value;
+  int64_t Value;
 
   /// ValueSize - The size (in bytes) of \arg Value to use when filling.
   unsigned ValueSize;
@@ -184,10 +238,10 @@ class MCFillFragment : public MCFragment {
   uint64_t Count;
 
 public:
-  MCFillFragment(const MCExpr &_Value, unsigned _ValueSize, uint64_t _Count,
-                 MCSectionData *SD = 0) 
+  MCFillFragment(int64_t _Value, unsigned _ValueSize, uint64_t _Count,
+                 MCSectionData *SD = 0)
     : MCFragment(FT_Fill, SD),
-      Value(&_Value), ValueSize(_ValueSize), Count(_Count) {}
+      Value(_Value), ValueSize(_ValueSize), Count(_Count) {}
 
   /// @name Accessors
   /// @{
@@ -196,25 +250,27 @@ public:
     return ValueSize * Count;
   }
 
-  const MCExpr &getValue() const { return *Value; }
-  
+  int64_t getValue() const { return Value; }
+
   unsigned getValueSize() const { return ValueSize; }
 
   uint64_t getCount() const { return Count; }
 
   /// @}
 
-  static bool classof(const MCFragment *F) { 
-    return F->getKind() == MCFragment::FT_Fill; 
+  static bool classof(const MCFragment *F) {
+    return F->getKind() == MCFragment::FT_Fill;
   }
   static bool classof(const MCFillFragment *) { return true; }
+
+  virtual void dump();
 };
 
 class MCOrgFragment : public MCFragment {
   /// Offset - The offset this fragment should start at.
   const MCExpr *Offset;
 
-  /// Value - Value to use for filling bytes.  
+  /// Value - Value to use for filling bytes.
   int8_t Value;
 
 public:
@@ -231,15 +287,17 @@ public:
   }
 
   const MCExpr &getOffset() const { return *Offset; }
-  
+
   uint8_t getValue() const { return Value; }
 
   /// @}
 
-  static bool classof(const MCFragment *F) { 
-    return F->getKind() == MCFragment::FT_Org; 
+  static bool classof(const MCFragment *F) {
+    return F->getKind() == MCFragment::FT_Org;
   }
   static bool classof(const MCOrgFragment *) { return true; }
+
+  virtual void dump();
 };
 
 /// MCZeroFillFragment - Represent data which has a fixed size and alignment,
@@ -265,15 +323,17 @@ public:
   }
 
   uint64_t getSize() const { return Size; }
-  
+
   unsigned getAlignment() const { return Alignment; }
 
   /// @}
 
-  static bool classof(const MCFragment *F) { 
-    return F->getKind() == MCFragment::FT_ZeroFill; 
+  static bool classof(const MCFragment *F) {
+    return F->getKind() == MCFragment::FT_ZeroFill;
   }
   static bool classof(const MCZeroFillFragment *) { return true; }
+
+  virtual void dump();
 };
 
 // FIXME: Should this be a separate class, or just merged into MCSection? Since
@@ -284,41 +344,13 @@ class MCSectionData : public ilist_node<MCSectionData> {
   void operator=(const MCSectionData&); // DO NOT IMPLEMENT
 
 public:
-  /// Fixup - Represent a fixed size region of bytes inside some fragment which
-  /// needs to be rewritten. This region will either be rewritten by the
-  /// assembler or cause a relocation entry to be generated.
-  struct Fixup {
-    /// Fragment - The fragment containing the fixup.
-    MCFragment *Fragment;
-    
-    /// Offset - The offset inside the fragment which needs to be rewritten.
-    uint64_t Offset;
-
-    /// Value - The expression to eventually write into the fragment.
-    const MCExpr *Value;
-
-    /// Size - The fixup size.
-    unsigned Size;
-
-    /// FixedValue - The value to replace the fix up by.
-    //
-    // FIXME: This should not be here.
-    uint64_t FixedValue;
-
-  public:
-    Fixup(MCFragment &_Fragment, uint64_t _Offset, const MCExpr &_Value,
-          unsigned _Size) 
-      : Fragment(&_Fragment), Offset(_Offset), Value(&_Value), Size(_Size),
-        FixedValue(0) {}
-  };
-
   typedef iplist<MCFragment> FragmentListType;
 
   typedef FragmentListType::const_iterator const_iterator;
   typedef FragmentListType::iterator iterator;
 
-  typedef std::vector<Fixup>::const_iterator const_fixup_iterator;
-  typedef std::vector<Fixup>::iterator fixup_iterator;
+  typedef FragmentListType::const_reverse_iterator const_reverse_iterator;
+  typedef FragmentListType::reverse_iterator reverse_iterator;
 
 private:
   iplist<MCFragment> Fragments;
@@ -343,15 +375,13 @@ private:
   /// initialized.
   uint64_t FileSize;
 
-  /// LastFixupLookup - Cache for the last looked up fixup.
-  mutable unsigned LastFixupLookup;
+  /// HasInstructions - Whether this section has had instructions emitted into
+  /// it.
+  unsigned HasInstructions : 1;
 
-  /// Fixups - The list of fixups in this section.
-  std::vector<Fixup> Fixups;
-  
   /// @}
 
-public:    
+public:
   // Only for use as sentinel.
   MCSectionData();
   MCSectionData(const MCSection &Section, MCAssembler *A = 0);
@@ -373,27 +403,15 @@ public:
   iterator end() { return Fragments.end(); }
   const_iterator end() const { return Fragments.end(); }
 
-  size_t size() const { return Fragments.size(); }
+  reverse_iterator rbegin() { return Fragments.rbegin(); }
+  const_reverse_iterator rbegin() const { return Fragments.rbegin(); }
 
-  bool empty() const { return Fragments.empty(); }
+  reverse_iterator rend() { return Fragments.rend(); }
+  const_reverse_iterator rend() const { return Fragments.rend(); }
 
-  /// @}
-  /// @name Fixup Access
-  /// @{
-
-  std::vector<Fixup> &getFixups() {
-    return Fixups;
-  }
-
-  fixup_iterator fixup_begin() {
-    return Fixups.begin();
-  }
-
-  fixup_iterator fixup_end() {
-    return Fixups.end();
-  }
+  size_t size() const { return Fragments.size(); }
 
-  size_t fixup_size() const { return Fixups.size(); }
+  bool empty() const { return Fragments.empty(); }
 
   /// @}
   /// @name Assembler Backend Support
@@ -401,35 +419,30 @@ public:
   //
   // FIXME: This could all be kept private to the assembler implementation.
 
-  /// LookupFixup - Look up the fixup for the given \arg Fragment and \arg
-  /// Offset.
-  ///
-  /// If multiple fixups exist for the same fragment and offset it is undefined
-  /// which one is returned.
-  //
-  // FIXME: This isn't horribly slow in practice, but there are much nicer
-  // solutions to applying the fixups.
-  const Fixup *LookupFixup(const MCFragment *Fragment, uint64_t Offset) const;
-
-  uint64_t getAddress() const { 
+  uint64_t getAddress() const {
     assert(Address != ~UINT64_C(0) && "Address not set!");
     return Address;
   }
   void setAddress(uint64_t Value) { Address = Value; }
 
-  uint64_t getSize() const { 
+  uint64_t getSize() const {
     assert(Size != ~UINT64_C(0) && "File size not set!");
     return Size;
   }
   void setSize(uint64_t Value) { Size = Value; }
 
-  uint64_t getFileSize() const { 
+  uint64_t getFileSize() const {
     assert(FileSize != ~UINT64_C(0) && "File size not set!");
     return FileSize;
   }
-  void setFileSize(uint64_t Value) { FileSize = Value; }  
+  void setFileSize(uint64_t Value) { FileSize = Value; }
+
+  bool hasInstructions() const { return HasInstructions; }
+  void setHasInstructions(bool Value) { HasInstructions = Value; }
 
   /// @}
+
+  void dump();
 };
 
 // FIXME: Same concerns as with SectionData.
@@ -443,7 +456,7 @@ public:
   /// Offset - The offset to apply to the fragment address to form this symbol's
   /// value.
   uint64_t Offset;
-    
+
   /// IsExternal - True if this symbol is visible outside this translation
   /// unit.
   unsigned IsExternal : 1;
@@ -489,10 +502,10 @@ public:
   /// @}
   /// @name Symbol Attributes
   /// @{
-  
+
   bool isExternal() const { return IsExternal; }
   void setExternal(bool Value) { IsExternal = Value; }
-  
+
   bool isPrivateExtern() const { return IsPrivateExtern; }
   void setPrivateExtern(bool Value) { IsPrivateExtern = Value; }
 
@@ -525,14 +538,16 @@ public:
 
   /// setFlags - Set the (implementation defined) symbol flags.
   void setFlags(uint32_t Value) { Flags = Value; }
-  
+
   /// getIndex - Get the (implementation defined) index.
   uint64_t getIndex() const { return Index; }
 
   /// setIndex - Set the (implementation defined) index.
   void setIndex(uint64_t Value) { Index = Value; }
-  
-  /// @}  
+
+  /// @}
+
+  void dump();
 };
 
 // FIXME: This really doesn't belong here. See comments below.
@@ -561,7 +576,7 @@ private:
   MCContext &Context;
 
   raw_ostream &OS;
-  
+
   iplist<MCSectionData> Sections;
 
   iplist<MCSymbolData> Symbols;
@@ -605,7 +620,7 @@ public:
   /// @{
 
   const SectionDataListType &getSectionList() const { return Sections; }
-  SectionDataListType &getSectionList() { return Sections; }  
+  SectionDataListType &getSectionList() { return Sections; }
 
   iterator begin() { return Sections.begin(); }
   const_iterator begin() const { return Sections.begin(); }
@@ -652,6 +667,8 @@ public:
   size_t indirect_symbol_size() const { return IndirectSymbols.size(); }
 
   /// @}
+
+  void dump();
 };
 
 } // end namespace llvm
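The `classof` members that recur throughout the fragment classes above implement LLVM's custom RTTI idiom: each subclass carries a kind tag set in its constructor, and `isa<>`/`dyn_cast<>` consult `classof` instead of C++ RTTI. A minimal standalone sketch of the pattern (simplified stand-in types, not the LLVM classes themselves):

```cpp
#include <cassert>

// Each subclass stamps its kind into the base at construction time.
struct Fragment {
  enum Kind { FT_Data, FT_Fill } K;
  explicit Fragment(Kind K) : K(K) {}
  Kind getKind() const { return K; }
};

struct DataFragment : Fragment {
  DataFragment() : Fragment(FT_Data) {}
  // classof answers: "could this base pointer be a DataFragment?"
  static bool classof(const Fragment *F) { return F->getKind() == FT_Data; }
};

struct FillFragment : Fragment {
  FillFragment() : Fragment(FT_Fill) {}
  static bool classof(const Fragment *F) { return F->getKind() == FT_Fill; }
};

// Hand-rolled stand-in for llvm::isa<>, which dispatches to classof.
template <typename To> bool isa(const Fragment *F) { return To::classof(F); }
```

This is why the diff can add `virtual void dump()` without virtual type queries: kind dispatch is done through the stored tag.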
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCCodeEmitter.h b/libclamav/c++/llvm/include/llvm/MC/MCCodeEmitter.h
index ad42dc2..fe1aff4 100644
--- a/libclamav/c++/llvm/include/llvm/MC/MCCodeEmitter.h
+++ b/libclamav/c++/llvm/include/llvm/MC/MCCodeEmitter.h
@@ -10,23 +10,60 @@
 #ifndef LLVM_MC_MCCODEEMITTER_H
 #define LLVM_MC_MCCODEEMITTER_H
 
+#include "llvm/MC/MCFixup.h"
+
+#include <cassert>
+
 namespace llvm {
+class MCExpr;
 class MCInst;
 class raw_ostream;
+template<typename T> class SmallVectorImpl;
+
+/// MCFixupKindInfo - Target independent information on a fixup kind.
+struct MCFixupKindInfo {
+  /// A target specific name for the fixup kind. The names will be unique for
+  /// distinct kinds on any given target.
+  const char *Name;
+
+  /// The bit offset to write the relocation into.
+  //
+  // FIXME: These two fields are under-specified and not general enough, but it
+  // covers many things, and is enough to let the AsmStreamer pretty-print
+  // the encoding.
+  unsigned TargetOffset;
+
+  /// The number of bits written by this fixup. The bits are assumed to be
+  /// contiguous.
+  unsigned TargetSize;
+};
 
 /// MCCodeEmitter - Generic instruction encoding interface.
 class MCCodeEmitter {
+private:
   MCCodeEmitter(const MCCodeEmitter &);   // DO NOT IMPLEMENT
   void operator=(const MCCodeEmitter &);  // DO NOT IMPLEMENT
 protected: // Can only create subclasses.
   MCCodeEmitter();
- 
+
 public:
   virtual ~MCCodeEmitter();
 
+  /// @name Target Independent Fixup Information
+  /// @{
+
+  /// getNumFixupKinds - Get the number of target specific fixup kinds.
+  virtual unsigned getNumFixupKinds() const = 0;
+
+  /// getFixupKindInfo - Get information on a fixup kind.
+  virtual const MCFixupKindInfo &getFixupKindInfo(MCFixupKind Kind) const;
+
+  /// @}
+
   /// EncodeInstruction - Encode the given \arg Inst to bytes on the output
   /// stream \arg OS.
-  virtual void EncodeInstruction(const MCInst &Inst, raw_ostream &OS) const = 0;
+  virtual void EncodeInstruction(const MCInst &Inst, raw_ostream &OS,
+                                 SmallVectorImpl<MCFixup> &Fixups) const = 0;
 };
 
 } // End llvm namespace
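The new `getNumFixupKinds`/`getFixupKindInfo` hooks imply each target keeps a table of per-kind metadata. A hypothetical sketch of how a backend might back that lookup (the kind names and values here are invented for illustration, not LLVM's real tables):

```cpp
#include <cassert>
#include <cstring>

// Mirrors the shape of MCFixupKindInfo from the header above.
struct FixupKindInfo {
  const char *Name;      // unique, target-specific name for the kind
  unsigned TargetOffset; // bit offset to write the relocation into
  unsigned TargetSize;   // number of contiguous bits written by the fixup
};

enum { FirstTargetFixupKind = 128, NumTargetFixupKinds = 2 };

// A static array indexed by (Kind - FirstTargetFixupKind), one entry per
// target-specific kind; generic kinds below 128 would be handled separately.
const FixupKindInfo &getFixupKindInfo(unsigned Kind) {
  static const FixupKindInfo Infos[NumTargetFixupKinds] = {
    {"fixup_example_4byte", 0, 32}, // invented example kinds
    {"fixup_example_pcrel", 0, 32},
  };
  assert(Kind >= FirstTargetFixupKind &&
         Kind < FirstTargetFixupKind + NumTargetFixupKinds &&
         "Unknown fixup kind!");
  return Infos[Kind - FirstTargetFixupKind];
}
```

The AsmStreamer can then pretty-print an encoding by slicing out `TargetSize` bits at `TargetOffset` for each fixup an instruction carries.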
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCDirectives.h b/libclamav/c++/llvm/include/llvm/MC/MCDirectives.h
index 609a9a4..1f7364d 100644
--- a/libclamav/c++/llvm/include/llvm/MC/MCDirectives.h
+++ b/libclamav/c++/llvm/include/llvm/MC/MCDirectives.h
@@ -17,32 +17,32 @@
 namespace llvm {
 
 enum MCSymbolAttr {
-  MCSA_Invalid = 0,    /// Not a valid directive.
+  MCSA_Invalid = 0,    ///< Not a valid directive.
 
   // Various directives in alphabetical order.
-  MCSA_ELF_TypeFunction,    /// .type _foo, STT_FUNC  # aka @function
-  MCSA_ELF_TypeIndFunction, /// .type _foo, STT_GNU_IFUNC
-  MCSA_ELF_TypeObject,      /// .type _foo, STT_OBJECT  # aka @object
-  MCSA_ELF_TypeTLS,         /// .type _foo, STT_TLS     # aka @tls_object
-  MCSA_ELF_TypeCommon,      /// .type _foo, STT_COMMON  # aka @common
-  MCSA_ELF_TypeNoType,      /// .type _foo, STT_NOTYPE  # aka @notype
-  MCSA_Global,              /// .globl
-  MCSA_Hidden,              /// .hidden (ELF)
-  MCSA_IndirectSymbol,      /// .indirect_symbol (MachO)
-  MCSA_Internal,            /// .internal (ELF)
-  MCSA_LazyReference,       /// .lazy_reference (MachO)
-  MCSA_Local,               /// .local (ELF)
-  MCSA_NoDeadStrip,         /// .no_dead_strip (MachO)
-  MCSA_PrivateExtern,       /// .private_extern (MachO)
-  MCSA_Protected,           /// .protected (ELF)
-  MCSA_Reference,           /// .reference (MachO)
-  MCSA_Weak,                /// .weak
-  MCSA_WeakDefinition,      /// .weak_definition (MachO)
-  MCSA_WeakReference        /// .weak_reference (MachO)
+  MCSA_ELF_TypeFunction,    ///< .type _foo, STT_FUNC  # aka @function
+  MCSA_ELF_TypeIndFunction, ///< .type _foo, STT_GNU_IFUNC
+  MCSA_ELF_TypeObject,      ///< .type _foo, STT_OBJECT  # aka @object
+  MCSA_ELF_TypeTLS,         ///< .type _foo, STT_TLS     # aka @tls_object
+  MCSA_ELF_TypeCommon,      ///< .type _foo, STT_COMMON  # aka @common
+  MCSA_ELF_TypeNoType,      ///< .type _foo, STT_NOTYPE  # aka @notype
+  MCSA_Global,              ///< .globl
+  MCSA_Hidden,              ///< .hidden (ELF)
+  MCSA_IndirectSymbol,      ///< .indirect_symbol (MachO)
+  MCSA_Internal,            ///< .internal (ELF)
+  MCSA_LazyReference,       ///< .lazy_reference (MachO)
+  MCSA_Local,               ///< .local (ELF)
+  MCSA_NoDeadStrip,         ///< .no_dead_strip (MachO)
+  MCSA_PrivateExtern,       ///< .private_extern (MachO)
+  MCSA_Protected,           ///< .protected (ELF)
+  MCSA_Reference,           ///< .reference (MachO)
+  MCSA_Weak,                ///< .weak
+  MCSA_WeakDefinition,      ///< .weak_definition (MachO)
+  MCSA_WeakReference        ///< .weak_reference (MachO)
 };
 
 enum MCAssemblerFlag {
-  MCAF_SubsectionsViaSymbols  /// .subsections_via_symbols (MachO)
+  MCAF_SubsectionsViaSymbols  ///< .subsections_via_symbols (MachO)
 };
   
 } // end namespace llvm
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCExpr.h b/libclamav/c++/llvm/include/llvm/MC/MCExpr.h
index 73d5f8e..fce7602 100644
--- a/libclamav/c++/llvm/include/llvm/MC/MCExpr.h
+++ b/libclamav/c++/llvm/include/llvm/MC/MCExpr.h
@@ -29,7 +29,8 @@ public:
     Binary,    ///< Binary expressions.
     Constant,  ///< Constant expressions.
     SymbolRef, ///< References to labels and assigned expressions.
-    Unary      ///< Unary expressions.
+    Unary,     ///< Unary expressions.
+    Target     ///< Target specific expression.
   };
 
 private:
@@ -39,7 +40,7 @@ private:
   void operator=(const MCExpr&); // DO NOT IMPLEMENT
 
 protected:
-  MCExpr(ExprKind _Kind) : Kind(_Kind) {}
+  explicit MCExpr(ExprKind _Kind) : Kind(_Kind) {}
 
 public:
   /// @name Accessors
@@ -85,7 +86,7 @@ inline raw_ostream &operator<<(raw_ostream &OS, const MCExpr &E) {
 class MCConstantExpr : public MCExpr {
   int64_t Value;
 
-  MCConstantExpr(int64_t _Value)
+  explicit MCConstantExpr(int64_t _Value)
     : MCExpr(MCExpr::Constant), Value(_Value) {}
 
 public:
@@ -117,7 +118,7 @@ public:
 class MCSymbolRefExpr : public MCExpr {
   const MCSymbol *Symbol;
 
-  MCSymbolRefExpr(const MCSymbol *_Symbol)
+  explicit MCSymbolRefExpr(const MCSymbol *_Symbol)
     : MCExpr(MCExpr::SymbolRef), Symbol(_Symbol) {}
 
 public:
@@ -201,20 +202,24 @@ public:
   enum Opcode {
     Add,  ///< Addition.
     And,  ///< Bitwise and.
-    Div,  ///< Division.
+    Div,  ///< Signed division.
     EQ,   ///< Equality comparison.
-    GT,   ///< Greater than comparison.
-    GTE,  ///< Greater than or equal comparison.
+    GT,   ///< Signed greater than comparison (result is either 0 or some
+          ///< target-specific non-zero value)
+    GTE,  ///< Signed greater than or equal comparison (result is either 0 or
+          ///< some target-specific non-zero value).
     LAnd, ///< Logical and.
     LOr,  ///< Logical or.
-    LT,   ///< Less than comparison.
-    LTE,  ///< Less than or equal comparison.
-    Mod,  ///< Modulus.
+    LT,   ///< Signed less than comparison (result is either 0 or
+          ///< some target-specific non-zero value).
+    LTE,  ///< Signed less than or equal comparison (result is either 0 or
+          ///< some target-specific non-zero value).
+    Mod,  ///< Signed remainder.
     Mul,  ///< Multiplication.
     NE,   ///< Inequality comparison.
     Or,   ///< Bitwise or.
-    Shl,  ///< Bitwise shift left.
-    Shr,  ///< Bitwise shift right.
+    Shl,  ///< Shift left.
+    Shr,  ///< Shift right (arithmetic or logical, depending on target)
     Sub,  ///< Subtraction.
     Xor   ///< Bitwise exclusive or.
   };
@@ -326,6 +331,28 @@ public:
   static bool classof(const MCBinaryExpr *) { return true; }
 };
 
+/// MCTargetExpr - This is an extension point for target-specific MCExpr
+/// subclasses to implement.
+///
+/// NOTE: All subclasses are required to have trivial destructors because
+/// MCExprs are bump pointer allocated and not destructed.
+class MCTargetExpr : public MCExpr {
+  virtual void Anchor();
+protected:
+  MCTargetExpr() : MCExpr(Target) {}
+  virtual ~MCTargetExpr() {}
+public:
+  
+  virtual void PrintImpl(raw_ostream &OS) const = 0;
+  virtual bool EvaluateAsRelocatableImpl(MCValue &Res) const = 0;
+
+  
+  static bool classof(const MCExpr *E) {
+    return E->getKind() == MCExpr::Target;
+  }
+  static bool classof(const MCTargetExpr *) { return true; }
+};
+
 } // end namespace llvm
 
 #endif
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCFixup.h b/libclamav/c++/llvm/include/llvm/MC/MCFixup.h
new file mode 100644
index 0000000..cd0dd19
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/MC/MCFixup.h
@@ -0,0 +1,108 @@
+//===-- llvm/MC/MCFixup.h - Instruction Relocation and Patching -*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_MC_MCFIXUP_H
+#define LLVM_MC_MCFIXUP_H
+
+#include <cassert>
+
+namespace llvm {
+class MCExpr;
+
+// Private constants, do not use.
+//
+// This is currently laid out so that the MCFixup fields can be efficiently
+// accessed, while keeping the offset field large enough that the assembler
+// backend can reasonably use the MCFixup representation for an entire fragment
+// (splitting any overly large fragments).
+//
+// The division of bits between the kind and the opindex can be tweaked if we
+// end up needing more bits for target dependent kinds.
+enum {
+  MCFIXUP_NUM_GENERIC_KINDS = 128,
+  MCFIXUP_NUM_KIND_BITS = 16,
+  MCFIXUP_NUM_OFFSET_BITS = (32 - MCFIXUP_NUM_KIND_BITS)
+};
+
+/// MCFixupKind - Extensible enumeration to represent the type of a fixup.
+enum MCFixupKind {
+  FK_Data_1 = 0, ///< A one-byte fixup.
+  FK_Data_2,     ///< A two-byte fixup.
+  FK_Data_4,     ///< A four-byte fixup.
+  FK_Data_8,     ///< An eight-byte fixup.
+
+  FirstTargetFixupKind = MCFIXUP_NUM_GENERIC_KINDS,
+
+  MaxTargetFixupKind = (1 << MCFIXUP_NUM_KIND_BITS)
+};
+
+/// MCFixup - Encode information on a single operation to perform on a byte
+/// sequence (e.g., an encoded instruction) which requires assemble- or run-
+/// time patching.
+///
+/// Fixups are used any time the target instruction encoder needs to represent
+/// some value in an instruction which is not yet concrete. The encoder will
+/// encode the instruction assuming the value is 0, and emit a fixup which
+/// communicates to the assembler backend how it should rewrite the encoded
+/// value.
+///
+/// During the process of relaxation, the assembler will apply fixups as
+/// symbolic values become concrete. When relaxation is complete, any remaining
+/// fixups become relocations in the object file (or errors, if the fixup cannot
+/// be encoded on the target).
+class MCFixup {
+  static const unsigned MaxOffset = 1 << MCFIXUP_NUM_OFFSET_BITS;
+
+  /// The value to put into the fixup location. The exact interpretation of the
+  /// expression is target dependent, usually it will be one of the operands to
+  /// an instruction or an assembler directive.
+  const MCExpr *Value;
+
+  /// The byte index of the start of the relocation inside the encoded
+  /// instruction.
+  unsigned Offset : MCFIXUP_NUM_OFFSET_BITS;
+
+  /// The target dependent kind of fixup item this is. The kind is used to
+  /// determine how the operand value should be encoded into the instruction.
+  unsigned Kind : MCFIXUP_NUM_KIND_BITS;
+
+public:
+  static MCFixup Create(unsigned Offset, const MCExpr *Value,
+                        MCFixupKind Kind) {
+    MCFixup FI;
+    FI.Value = Value;
+    FI.Offset = Offset;
+    FI.Kind = unsigned(Kind);
+
+    assert(Offset == FI.getOffset() && "Offset out of range!");
+    assert(Kind == FI.getKind() && "Kind out of range!");
+    return FI;
+  }
+
+  MCFixupKind getKind() const { return MCFixupKind(Kind); }
+
+  unsigned getOffset() const { return Offset; }
+
+  const MCExpr *getValue() const { return Value; }
+
+  /// getKindForSize - Return the generic fixup kind for a value with the given
+  /// size. It is an error to pass an unsupported size.
+  static MCFixupKind getKindForSize(unsigned Size) {
+    switch (Size) {
+    default: assert(0 && "Invalid generic fixup size!");
+    case 1: return FK_Data_1;
+    case 2: return FK_Data_2;
+    case 4: return FK_Data_4;
+    case 8: return FK_Data_8;
+    }
+  }
+};
+
+} // End llvm namespace
+
+#endif
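The comment block in MCFixup.h describes packing a kind and an intra-fragment offset into a single 32-bit word. A standalone sketch of that bit layout (constants mirrored here under the same 16/16 split; this is an illustration, not the LLVM class):

```cpp
#include <cassert>
#include <cstdint>

// Same division as MCFIXUP_NUM_KIND_BITS / MCFIXUP_NUM_OFFSET_BITS.
enum { kNumKindBits = 16, kNumOffsetBits = 32 - kNumKindBits };

struct PackedFixup {
  uint32_t Offset : kNumOffsetBits; // byte index inside the encoded fragment
  uint32_t Kind   : kNumKindBits;   // target-dependent fixup kind

  static PackedFixup create(unsigned Offset, unsigned Kind) {
    PackedFixup F;
    F.Offset = Offset;
    F.Kind = Kind;
    // Same guard as MCFixup::Create: bitfield truncation must not have
    // silently dropped high bits of either field.
    assert(F.Offset == Offset && "Offset out of range!");
    assert(F.Kind == Kind && "Kind out of range!");
    return F;
  }
};
```

Keeping the offset field wide is what lets the backend reuse one fixup record across an entire fragment, splitting only fragments too large for the offset field.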
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCInstPrinter.h b/libclamav/c++/llvm/include/llvm/MC/MCInstPrinter.h
index d62a9da..d2ddc5b 100644
--- a/libclamav/c++/llvm/include/llvm/MC/MCInstPrinter.h
+++ b/libclamav/c++/llvm/include/llvm/MC/MCInstPrinter.h
@@ -14,22 +14,36 @@ namespace llvm {
 class MCInst;
 class raw_ostream;
 class MCAsmInfo;
+class StringRef;
 
-  
 /// MCInstPrinter - This is an instance of a target assembly language printer
 /// that converts an MCInst to valid target assembly syntax.
 class MCInstPrinter {
 protected:
+  /// O - The main stream to emit instruction text to.
   raw_ostream &O;
+  
+  /// CommentStream - a stream that comments can be emitted to if desired.
+  /// Each comment must end with a newline.  This will be null if verbose
+  /// assembly emission is disabled.
+  raw_ostream *CommentStream;
   const MCAsmInfo &MAI;
 public:
-  MCInstPrinter(raw_ostream &o, const MCAsmInfo &mai) : O(o), MAI(mai) {}
+  MCInstPrinter(raw_ostream &o, const MCAsmInfo &mai)
+    : O(o), CommentStream(0), MAI(mai) {}
   
   virtual ~MCInstPrinter();
+
+  /// setCommentStream - Specify a stream to emit comments to.
+  void setCommentStream(raw_ostream &OS) { CommentStream = &OS; }
   
   /// printInst - Print the specified MCInst to the current raw_ostream.
   ///
   virtual void printInst(const MCInst *MI) = 0;
+  
+  /// getOpcodeName - Return the name of the specified opcode enum (e.g.
+  /// "MOV32ri") or empty if we can't resolve it.
+  virtual StringRef getOpcodeName(unsigned Opcode) const;
 };
   
 } // namespace llvm
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCParser/MCAsmLexer.h b/libclamav/c++/llvm/include/llvm/MC/MCParser/MCAsmLexer.h
deleted file mode 100644
index 043c363..0000000
--- a/libclamav/c++/llvm/include/llvm/MC/MCParser/MCAsmLexer.h
+++ /dev/null
@@ -1,171 +0,0 @@
-//===-- llvm/MC/MCAsmLexer.h - Abstract Asm Lexer Interface -----*- C++ -*-===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_MC_MCASMLEXER_H
-#define LLVM_MC_MCASMLEXER_H
-
-#include "llvm/ADT/StringRef.h"
-#include "llvm/System/DataTypes.h"
-#include "llvm/Support/SMLoc.h"
-
-namespace llvm {
-class MCAsmLexer;
-class MCInst;
-class Target;
-
-/// AsmToken - Target independent representation for an assembler token.
-class AsmToken {
-public:
-  enum TokenKind {
-    // Markers
-    Eof, Error,
-
-    // String values.
-    Identifier,
-    String,
-    
-    // Integer values.
-    Integer,
-    
-    // Register values (stored in IntVal).  Only used by TargetAsmLexer.
-    Register,
-    
-    // No-value.
-    EndOfStatement,
-    Colon,
-    Plus, Minus, Tilde,
-    Slash,    // '/'
-    LParen, RParen, LBrac, RBrac, LCurly, RCurly,
-    Star, Comma, Dollar, Equal, EqualEqual,
-    
-    Pipe, PipePipe, Caret, 
-    Amp, AmpAmp, Exclaim, ExclaimEqual, Percent, Hash,
-    Less, LessEqual, LessLess, LessGreater,
-    Greater, GreaterEqual, GreaterGreater
-  };
-
-  TokenKind Kind;
-
-  /// A reference to the entire token contents; this is always a pointer into
-  /// a memory buffer owned by the source manager.
-  StringRef Str;
-
-  int64_t IntVal;
-
-public:
-  AsmToken() {}
-  AsmToken(TokenKind _Kind, StringRef _Str, int64_t _IntVal = 0)
-    : Kind(_Kind), Str(_Str), IntVal(_IntVal) {}
-
-  TokenKind getKind() const { return Kind; }
-  bool is(TokenKind K) const { return Kind == K; }
-  bool isNot(TokenKind K) const { return Kind != K; }
-
-  SMLoc getLoc() const;
-
-  /// getStringContents - Get the contents of a string token (without quotes).
-  StringRef getStringContents() const { 
-    assert(Kind == String && "This token isn't a string!");
-    return Str.slice(1, Str.size() - 1);
-  }
-
-  /// getIdentifier - Get the identifier string for the current token, which
-  /// should be an identifier or a string. This gets the portion of the string
-  /// which should be used as the identifier, e.g., it does not include the
-  /// quotes on strings.
-  StringRef getIdentifier() const {
-    if (Kind == Identifier)
-      return getString();
-    return getStringContents();
-  }
-
-  /// getString - Get the string for the current token, this includes all
-  /// characters (for example, the quotes on strings) in the token.
-  ///
-  /// The returned StringRef points into the source manager's memory buffer, and
-  /// is safe to store across calls to Lex().
-  StringRef getString() const { return Str; }
-
-  // FIXME: Don't compute this in advance, it makes every token larger, and is
-  // also not generally what we want (it is nicer for recovery etc. to lex 123br
-  // as a single token, then diagnose as an invalid number).
-  int64_t getIntVal() const { 
-    assert(Kind == Integer && "This token isn't an integer!");
-    return IntVal; 
-  }
-  
-  /// getRegVal - Get the register number for the current token, which should
-  /// be a register.
-  unsigned getRegVal() const {
-    assert(Kind == Register && "This token isn't a register!");
-    return static_cast<unsigned>(IntVal);
-  }
-};
-
-/// MCAsmLexer - Generic assembler lexer interface, for use by target specific
-/// assembly lexers.
-class MCAsmLexer {
-  /// The current token, stored in the base class for faster access.
-  AsmToken CurTok;
-  
-  /// The location and description of the current error
-  SMLoc ErrLoc;
-  std::string Err;
-
-  MCAsmLexer(const MCAsmLexer &);   // DO NOT IMPLEMENT
-  void operator=(const MCAsmLexer &);  // DO NOT IMPLEMENT
-protected: // Can only create subclasses.
-  MCAsmLexer();
-
-  virtual AsmToken LexToken() = 0;
-  
-  void SetError(const SMLoc &errLoc, const std::string &err) {
-    ErrLoc = errLoc;
-    Err = err;
-  }
-  
-public:
-  virtual ~MCAsmLexer();
-
-  /// Lex - Consume the next token from the input stream and return it.
-  ///
-  /// The lexer will continuosly return the end-of-file token once the end of
-  /// the main input file has been reached.
-  const AsmToken &Lex() {
-    return CurTok = LexToken();
-  }
-
-  /// getTok - Get the current (last) lexed token.
-  const AsmToken &getTok() {
-    return CurTok;
-  }
-  
-  /// getErrLoc - Get the current error location
-  const SMLoc &getErrLoc() {
-    return ErrLoc;
-  }
-           
-  /// getErr - Get the current error string
-  const std::string &getErr() {
-    return Err;
-  }
-
-  /// getKind - Get the kind of current token.
-  AsmToken::TokenKind getKind() const { return CurTok.getKind(); }
-
-  /// is - Check if the current token has kind \arg K.
-  bool is(AsmToken::TokenKind K) const { return CurTok.is(K); }
-
-  /// isNot - Check if the current token has kind \arg K.
-  bool isNot(AsmToken::TokenKind K) const { return CurTok.isNot(K); }
-};
-
-} // End llvm namespace
-
-#endif
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCParser/MCAsmParser.h b/libclamav/c++/llvm/include/llvm/MC/MCParser/MCAsmParser.h
deleted file mode 100644
index 843c692..0000000
--- a/libclamav/c++/llvm/include/llvm/MC/MCParser/MCAsmParser.h
+++ /dev/null
@@ -1,88 +0,0 @@
-//===-- llvm/MC/MCAsmParser.h - Abstract Asm Parser Interface ---*- C++ -*-===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_MC_MCASMPARSER_H
-#define LLVM_MC_MCASMPARSER_H
-
-#include "llvm/System/DataTypes.h"
-
-namespace llvm {
-class AsmToken;
-class MCAsmLexer;
-class MCContext;
-class MCExpr;
-class MCStreamer;
-class MCValue;
-class SMLoc;
-class Twine;
-
-/// MCAsmParser - Generic assembler parser interface, for use by target specific
-/// assembly parsers.
-class MCAsmParser {
-  MCAsmParser(const MCAsmParser &);   // DO NOT IMPLEMENT
-  void operator=(const MCAsmParser &);  // DO NOT IMPLEMENT
-protected: // Can only create subclasses.
-  MCAsmParser();
- 
-public:
-  virtual ~MCAsmParser();
-
-  virtual MCAsmLexer &getLexer() = 0;
-
-  virtual MCContext &getContext() = 0;
-
-  /// getSteamer - Return the output streamer for the assembler.
-  virtual MCStreamer &getStreamer() = 0;
-
-  /// Warning - Emit a warning at the location \arg L, with the message \arg
-  /// Msg.
-  virtual void Warning(SMLoc L, const Twine &Msg) = 0;
-
-  /// Warning - Emit an error at the location \arg L, with the message \arg
-  /// Msg.
-  ///
-  /// \return The return value is always true, as an idiomatic convenience to
-  /// clients.
-  virtual bool Error(SMLoc L, const Twine &Msg) = 0;
-
-  /// Lex - Get the next AsmToken in the stream, possibly handling file
-  /// inclusion first.
-  virtual const AsmToken &Lex() = 0;
-  
-  /// getTok - Get the current AsmToken from the stream.
-  const AsmToken &getTok();
-  
-  /// ParseExpression - Parse an arbitrary expression.
-  ///
-  /// @param Res - The value of the expression. The result is undefined
-  /// on error.
-  /// @result - False on success.
-  virtual bool ParseExpression(const MCExpr *&Res, SMLoc &EndLoc) = 0;
-  bool ParseExpression(const MCExpr *&Res);
-  
-  /// ParseParenExpression - Parse an arbitrary expression, assuming that an
-  /// initial '(' has already been consumed.
-  ///
-  /// @param Res - The value of the expression. The result is undefined
-  /// on error.
-  /// @result - False on success.
-  virtual bool ParseParenExpression(const MCExpr *&Res, SMLoc &EndLoc) = 0;
-
-  /// ParseAbsoluteExpression - Parse an expression which must evaluate to an
-  /// absolute value.
-  ///
-  /// @param Res - The value of the absolute expression. The result is undefined
-  /// on error.
-  /// @result - False on success.
-  virtual bool ParseAbsoluteExpression(int64_t &Res) = 0;
-};
-
-} // End llvm namespace
-
-#endif
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCParser/MCParsedAsmOperand.h b/libclamav/c++/llvm/include/llvm/MC/MCParser/MCParsedAsmOperand.h
deleted file mode 100644
index 7c2f5be..0000000
--- a/libclamav/c++/llvm/include/llvm/MC/MCParser/MCParsedAsmOperand.h
+++ /dev/null
@@ -1,33 +0,0 @@
-//===-- llvm/MC/MCParsedAsmOperand.h - Asm Parser Operand -------*- C++ -*-===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_MC_MCASMOPERAND_H
-#define LLVM_MC_MCASMOPERAND_H
-
-namespace llvm {
-class SMLoc;
-
-/// MCParsedAsmOperand - This abstract class represents a source-level assembly
-/// instruction operand.  It should be subclassed by target-specific code.  This
-/// base class is used by target-independent clients and is the interface
-/// between parsing an asm instruction and recognizing it.
-class MCParsedAsmOperand {
-public:  
-  MCParsedAsmOperand() {}
-  virtual ~MCParsedAsmOperand() {}
-  
-  /// getStartLoc - Get the location of the first token of this operand.
-  virtual SMLoc getStartLoc() const;
-  /// getEndLoc - Get the location of the last token of this operand.
-  virtual SMLoc getEndLoc() const;
-};
-
-} // end namespace llvm.
-
-#endif
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCStreamer.h b/libclamav/c++/llvm/include/llvm/MC/MCStreamer.h
index 2a2529b..624d9a6 100644
--- a/libclamav/c++/llvm/include/llvm/MC/MCStreamer.h
+++ b/libclamav/c++/llvm/include/llvm/MC/MCStreamer.h
@@ -60,6 +60,10 @@ namespace llvm {
 
     /// @name Assembly File Formatting.
     /// @{
+    
+    /// isVerboseAsm - Return true if this streamer supports verbose assembly at
+    /// all.
+    virtual bool isVerboseAsm() const { return false; }
 
     /// AddComment - Add a comment that can be emitted to the generated .s
     /// file if applicable as a QoI issue to make the output of the compiler
@@ -265,11 +269,21 @@ namespace llvm {
   /// createAsmStreamer - Create a machine code streamer which will print out
   /// assembly for the native target, suitable for compiling with a native
   /// assembler.
+  ///
+  /// \param InstPrint - If given, the instruction printer to use. If not given
+  /// the MCInst representation will be printed.
+  ///
+  /// \param CE - If given, a code emitter to use to show the instruction
+  /// encoding inline with the assembly.
+  ///
+  /// \param ShowInst - Whether to show the MCInst representation inline with
+  /// the assembly.
   MCStreamer *createAsmStreamer(MCContext &Ctx, formatted_raw_ostream &OS,
                                 const MCAsmInfo &MAI, bool isLittleEndian,
                                 bool isVerboseAsm,
                                 MCInstPrinter *InstPrint = 0,
-                                MCCodeEmitter *CE = 0);
+                                MCCodeEmitter *CE = 0,
+                                bool ShowInst = false);
 
   // FIXME: These two may end up getting rolled into a single
   // createObjectStreamer interface, which implements the assembler backend, and
@@ -278,7 +292,7 @@ namespace llvm {
   /// createMachOStream - Create a machine code streamer which will generative
   /// Mach-O format object files.
   MCStreamer *createMachOStreamer(MCContext &Ctx, raw_ostream &OS,
-                                  MCCodeEmitter *CE = 0);
+                                  MCCodeEmitter *CE);
 
   /// createELFStreamer - Create a machine code streamer which will generative
   /// ELF format object files.
diff --git a/libclamav/c++/llvm/include/llvm/MC/MCSymbol.h b/libclamav/c++/llvm/include/llvm/MC/MCSymbol.h
index e770604..d5c4d95 100644
--- a/libclamav/c++/llvm/include/llvm/MC/MCSymbol.h
+++ b/libclamav/c++/llvm/include/llvm/MC/MCSymbol.h
@@ -89,7 +89,7 @@ namespace llvm {
       return !isDefined();
     }
 
-    /// isAbsolute - Check if this this is an absolute symbol.
+    /// isAbsolute - Check if this is an absolute symbol.
     bool isAbsolute() const {
       return Section == AbsolutePseudoSection;
     }
diff --git a/libclamav/c++/llvm/include/llvm/Module.h b/libclamav/c++/llvm/include/llvm/Module.h
index 8dfb508..901fada 100644
--- a/libclamav/c++/llvm/include/llvm/Module.h
+++ b/libclamav/c++/llvm/include/llvm/Module.h
@@ -19,12 +19,14 @@
 #include "llvm/GlobalVariable.h"
 #include "llvm/GlobalAlias.h"
 #include "llvm/Metadata.h"
+#include "llvm/ADT/OwningPtr.h"
 #include "llvm/System/DataTypes.h"
 #include <vector>
 
 namespace llvm {
 
 class FunctionType;
+class GVMaterializer;
 class LLVMContext;
 class MDSymbolTable;
 
@@ -145,6 +147,7 @@ private:
   std::string GlobalScopeAsm;     ///< Inline Asm at global scope.
   ValueSymbolTable *ValSymTab;    ///< Symbol table for values
   TypeSymbolTable *TypeSymTab;    ///< Symbol table for types
+  OwningPtr<GVMaterializer> Materializer;  ///< Used to materialize GlobalValues
   std::string ModuleID;           ///< Human readable identifier for the module
   std::string TargetTriple;       ///< Platform target triple Module compiled on
   std::string DataLayout;         ///< Target data description
@@ -347,6 +350,50 @@ public:
   const Type *getTypeByName(StringRef Name) const;
 
 /// @}
+/// @name Materialization
+/// @{
+
+  /// setMaterializer - Sets the GVMaterializer to GVM.  This module must not
+  /// yet have a Materializer.  To reset the materializer for a module that
+  /// already has one, call MaterializeAllPermanently first.  Destroying this
+  /// module will destroy its materializer without materializing any more
+  /// GlobalValues.  Without destroying the Module, there is no way to detach or
+  /// destroy a materializer without materializing all the GVs it controls, to
+  /// avoid leaving orphan unmaterialized GVs.
+  void setMaterializer(GVMaterializer *GVM);
+  /// getMaterializer - Retrieves the GVMaterializer, if any, for this Module.
+  GVMaterializer *getMaterializer() const { return Materializer.get(); }
+
+  /// isMaterializable - True if the definition of GV has yet to be materialized
+  /// from the GVMaterializer.
+  bool isMaterializable(const GlobalValue *GV) const;
+  /// isDematerializable - Returns true if this GV was loaded from this Module's
+  /// GVMaterializer and the GVMaterializer knows how to dematerialize the GV.
+  bool isDematerializable(const GlobalValue *GV) const;
+
+  /// Materialize - Make sure the GlobalValue is fully read.  If the module is
+  /// corrupt, this returns true and fills in the optional string with
+  /// information about the problem.  If successful, this returns false.
+  bool Materialize(GlobalValue *GV, std::string *ErrInfo = 0);
+  /// Dematerialize - If the GlobalValue is read in, and if the GVMaterializer
+  /// supports it, release the memory for the function, and set it up to be
+  /// materialized lazily.  If !isDematerializable(), this method is a noop.
+  void Dematerialize(GlobalValue *GV);
+
+  /// MaterializeAll - Make sure all GlobalValues in this Module are fully read.
+  /// If the module is corrupt, this returns true and fills in the optional
+  /// string with information about the problem.  If successful, this returns
+  /// false.
+  bool MaterializeAll(std::string *ErrInfo = 0);
+
+  /// MaterializeAllPermanently - Make sure all GlobalValues in this Module are
+  /// fully read and clear the Materializer.  If the module is corrupt, this
+  /// returns true, fills in the optional string with information about the
+  /// problem, and DOES NOT clear the old Materializer.  If successful, this
+  /// returns false.
+  bool MaterializeAllPermanently(std::string *ErrInfo = 0);
+
+/// @}
 /// @name Direct access to the globals list, functions list, and symbol table
 /// @{
 
diff --git a/libclamav/c++/llvm/include/llvm/ModuleProvider.h b/libclamav/c++/llvm/include/llvm/ModuleProvider.h
deleted file mode 100644
index 8a0a20c..0000000
--- a/libclamav/c++/llvm/include/llvm/ModuleProvider.h
+++ /dev/null
@@ -1,88 +0,0 @@
-//===-- llvm/ModuleProvider.h - Interface for module providers --*- C++ -*-===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This file provides an abstract interface for loading a module from some
-// place.  This interface allows incremental or random access loading of
-// functions from the file.  This is useful for applications like JIT compilers
-// or interprocedural optimizers that do not need the entire program in memory
-// at the same time.
-//
-//===----------------------------------------------------------------------===//
-
-#ifndef MODULEPROVIDER_H
-#define MODULEPROVIDER_H
-
-#include <string>
-
-namespace llvm {
-
-class Function;
-class Module;
-
-class ModuleProvider {
-protected:
-  Module *TheModule;
-  ModuleProvider();
-
-public:
-  virtual ~ModuleProvider();
-
-  /// getModule - returns the module this provider is encapsulating.
-  ///
-  Module* getModule() { return TheModule; }
-
-  /// materializeFunction - make sure the given function is fully read.  If the
-  /// module is corrupt, this returns true and fills in the optional string
-  /// with information about the problem.  If successful, this returns false.
-  ///
-  virtual bool materializeFunction(Function *F, std::string *ErrInfo = 0) = 0;
-
-  /// dematerializeFunction - If the given function is read in, and if the
-  /// module provider supports it, release the memory for the function, and set
-  /// it up to be materialized lazily.  If the provider doesn't support this
-  /// capability, this method is a noop.
-  ///
-  virtual void dematerializeFunction(Function *) {}
-  
-  /// materializeModule - make sure the entire Module has been completely read.
-  /// On error, return null and fill in the error string if specified.
-  ///
-  virtual Module* materializeModule(std::string *ErrInfo = 0) = 0;
-
-  /// releaseModule - no longer delete the Module* when provider is destroyed.
-  /// On error, return null and fill in the error string if specified.
-  ///
-  virtual Module* releaseModule(std::string *ErrInfo = 0) {
-    // Since we're losing control of this Module, we must hand it back complete
-    if (!materializeModule(ErrInfo))
-      return 0;
-    Module *tempM = TheModule;
-    TheModule = 0;
-    return tempM;
-  }
-};
-
-
-/// ExistingModuleProvider - Allow conversion from a fully materialized Module
-/// into a ModuleProvider, allowing code that expects a ModuleProvider to work
-/// if we just have a Module.  Note that the ModuleProvider takes ownership of
-/// the Module specified.
-struct ExistingModuleProvider : public ModuleProvider {
-  explicit ExistingModuleProvider(Module *M) {
-    TheModule = M;
-  }
-  bool materializeFunction(Function *, std::string * = 0) {
-    return false;
-  }
-  Module* materializeModule(std::string * = 0) { return TheModule; }
-};
-
-} // End llvm namespace
-
-#endif
diff --git a/libclamav/c++/llvm/include/llvm/Pass.h b/libclamav/c++/llvm/include/llvm/Pass.h
index ab08afb..e822a0f 100644
--- a/libclamav/c++/llvm/include/llvm/Pass.h
+++ b/libclamav/c++/llvm/include/llvm/Pass.h
@@ -56,11 +56,11 @@ typedef const PassInfo* AnalysisID;
 /// Ordering of pass manager types is important here.
 enum PassManagerType {
   PMT_Unknown = 0,
-  PMT_ModulePassManager = 1, /// MPPassManager 
-  PMT_CallGraphPassManager,  /// CGPassManager
-  PMT_FunctionPassManager,   /// FPPassManager
-  PMT_LoopPassManager,       /// LPPassManager
-  PMT_BasicBlockPassManager, /// BBPassManager
+  PMT_ModulePassManager = 1, ///< MPPassManager 
+  PMT_CallGraphPassManager,  ///< CGPassManager
+  PMT_FunctionPassManager,   ///< FPPassManager
+  PMT_LoopPassManager,       ///< LPPassManager
+  PMT_BasicBlockPassManager, ///< BBPassManager
   PMT_Last
 };
 
diff --git a/libclamav/c++/llvm/include/llvm/PassManager.h b/libclamav/c++/llvm/include/llvm/PassManager.h
index a6703fd..4d91163 100644
--- a/libclamav/c++/llvm/include/llvm/PassManager.h
+++ b/libclamav/c++/llvm/include/llvm/PassManager.h
@@ -24,7 +24,6 @@ namespace llvm {
 class Pass;
 class ModulePass;
 class Module;
-class ModuleProvider;
 
 class PassManagerImpl;
 class FunctionPassManagerImpl;
@@ -71,8 +70,8 @@ private:
 class FunctionPassManager : public PassManagerBase {
 public:
   /// FunctionPassManager ctor - This initializes the pass manager.  It needs,
-  /// but does not take ownership of, the specified module provider.
-  explicit FunctionPassManager(ModuleProvider *P);
+  /// but does not take ownership of, the specified Module.
+  explicit FunctionPassManager(Module *M);
   ~FunctionPassManager();
  
   /// add - Add a pass to the queue of passes to run.  This passes
@@ -96,15 +95,9 @@ public:
   ///
   bool doFinalization();
   
-  /// getModuleProvider - Return the module provider that this passmanager is
-  /// currently using.  This is the module provider that it uses when a function
-  /// is optimized that is non-resident in the module.
-  ModuleProvider *getModuleProvider() const { return MP; }
-  void setModuleProvider(ModuleProvider *NewMP) { MP = NewMP; }
-
 private:
   FunctionPassManagerImpl *FPM;
-  ModuleProvider *MP;
+  Module *M;
 };
 
 } // End llvm namespace
diff --git a/libclamav/c++/llvm/include/llvm/PassManagers.h b/libclamav/c++/llvm/include/llvm/PassManagers.h
index 443a9e0..d5685c6 100644
--- a/libclamav/c++/llvm/include/llvm/PassManagers.h
+++ b/libclamav/c++/llvm/include/llvm/PassManagers.h
@@ -394,8 +394,8 @@ private:
                          const AnalysisUsage::VectorType &Set) const;
 
   // Set of available Analysis. This information is used while scheduling 
-  // pass. If a pass requires an analysis which is not not available then 
-  // equired analysis pass is scheduled to run before the pass itself is 
+  // pass. If a pass requires an analysis which is not available then 
+  // the required analysis pass is scheduled to run before the pass itself is
   // scheduled to run.
   std::map<AnalysisID, Pass*> AvailableAnalysis;
 
diff --git a/libclamav/c++/llvm/include/llvm/Support/Casting.h b/libclamav/c++/llvm/include/llvm/Support/Casting.h
index 37a7c3b..17bcb59 100644
--- a/libclamav/c++/llvm/include/llvm/Support/Casting.h
+++ b/libclamav/c++/llvm/include/llvm/Support/Casting.h
@@ -180,8 +180,9 @@ template<class To, class From, class SimpleFrom> struct cast_convert_val {
 template<class To, class FromTy> struct cast_convert_val<To,FromTy,FromTy> {
   // This _is_ a simple type, just cast it.
   static typename cast_retty<To, FromTy>::ret_type doit(const FromTy &Val) {
-    return reinterpret_cast<typename cast_retty<To, FromTy>::ret_type>(
-                         const_cast<FromTy&>(Val));
+    typename cast_retty<To, FromTy>::ret_type Res2
+     = (typename cast_retty<To, FromTy>::ret_type)const_cast<FromTy&>(Val);
+    return Res2;
   }
 };
 
diff --git a/libclamav/c++/llvm/include/llvm/Support/CommandLine.h b/libclamav/c++/llvm/include/llvm/Support/CommandLine.h
index 7f8b10c..3ee2313 100644
--- a/libclamav/c++/llvm/include/llvm/Support/CommandLine.h
+++ b/libclamav/c++/llvm/include/llvm/Support/CommandLine.h
@@ -1168,7 +1168,7 @@ class bits_storage<DataType, bool> {
 
   template<class T>
   static unsigned Bit(const T &V) {
-    unsigned BitPos = reinterpret_cast<unsigned>(V);
+    unsigned BitPos = (unsigned)V;
     assert(BitPos < sizeof(unsigned) * CHAR_BIT &&
           "enum exceeds width of bit vector!");
     return 1 << BitPos;
diff --git a/libclamav/c++/llvm/include/llvm/Support/ConstantFolder.h b/libclamav/c++/llvm/include/llvm/Support/ConstantFolder.h
index 1339e9f..ea6c5fd 100644
--- a/libclamav/c++/llvm/include/llvm/Support/ConstantFolder.h
+++ b/libclamav/c++/llvm/include/llvm/Support/ConstantFolder.h
@@ -39,6 +39,9 @@ public:
   Constant *CreateNSWAdd(Constant *LHS, Constant *RHS) const {
     return ConstantExpr::getNSWAdd(LHS, RHS);
   }
+  Constant *CreateNUWAdd(Constant *LHS, Constant *RHS) const {
+    return ConstantExpr::getNUWAdd(LHS, RHS);
+  }
   Constant *CreateFAdd(Constant *LHS, Constant *RHS) const {
     return ConstantExpr::getFAdd(LHS, RHS);
   }
@@ -48,6 +51,9 @@ public:
   Constant *CreateNSWSub(Constant *LHS, Constant *RHS) const {
     return ConstantExpr::getNSWSub(LHS, RHS);
   }
+  Constant *CreateNUWSub(Constant *LHS, Constant *RHS) const {
+    return ConstantExpr::getNUWSub(LHS, RHS);
+  }
   Constant *CreateFSub(Constant *LHS, Constant *RHS) const {
     return ConstantExpr::getFSub(LHS, RHS);
   }
@@ -57,6 +63,9 @@ public:
   Constant *CreateNSWMul(Constant *LHS, Constant *RHS) const {
     return ConstantExpr::getNSWMul(LHS, RHS);
   }
+  Constant *CreateNUWMul(Constant *LHS, Constant *RHS) const {
+    return ConstantExpr::getNUWMul(LHS, RHS);
+  }
   Constant *CreateFMul(Constant *LHS, Constant *RHS) const {
     return ConstantExpr::getFMul(LHS, RHS);
   }
@@ -115,6 +124,9 @@ public:
   Constant *CreateNSWNeg(Constant *C) const {
     return ConstantExpr::getNSWNeg(C);
   }
+  Constant *CreateNUWNeg(Constant *C) const {
+    return ConstantExpr::getNUWNeg(C);
+  }
   Constant *CreateFNeg(Constant *C) const {
     return ConstantExpr::getFNeg(C);
   }
diff --git a/libclamav/c++/llvm/include/llvm/Support/FormattedStream.h b/libclamav/c++/llvm/include/llvm/Support/FormattedStream.h
index af546f0..58a24bd 100644
--- a/libclamav/c++/llvm/include/llvm/Support/FormattedStream.h
+++ b/libclamav/c++/llvm/include/llvm/Support/FormattedStream.h
@@ -119,7 +119,7 @@ namespace llvm
     /// space.
     ///
     /// \param NewCol - The column to move to.
-    void PadToColumn(unsigned NewCol);
+    formatted_raw_ostream &PadToColumn(unsigned NewCol);
 
   private:
     void releaseStream() {
diff --git a/libclamav/c++/llvm/include/llvm/Support/IRBuilder.h b/libclamav/c++/llvm/include/llvm/Support/IRBuilder.h
index eabf6ad..c8aef9c 100644
--- a/libclamav/c++/llvm/include/llvm/Support/IRBuilder.h
+++ b/libclamav/c++/llvm/include/llvm/Support/IRBuilder.h
@@ -94,7 +94,7 @@ public:
   //===--------------------------------------------------------------------===//
   
   /// CreateGlobalString - Make a new global variable with an initializer that
-  /// has array of i8 type filled in the the nul terminated string value
+  /// has array of i8 type filled in with the nul terminated string value
   /// specified.  If Name is specified, it is the name of the global variable
   /// created.
   Value *CreateGlobalString(const char *Str = "", const Twine &Name = "");
@@ -318,6 +318,12 @@ public:
         return Folder.CreateNSWAdd(LC, RC);
     return Insert(BinaryOperator::CreateNSWAdd(LHS, RHS), Name);
   }
+  Value *CreateNUWAdd(Value *LHS, Value *RHS, const Twine &Name = "") {
+    if (Constant *LC = dyn_cast<Constant>(LHS))
+      if (Constant *RC = dyn_cast<Constant>(RHS))
+        return Folder.CreateNUWAdd(LC, RC);
+    return Insert(BinaryOperator::CreateNUWAdd(LHS, RHS), Name);
+  }
   Value *CreateFAdd(Value *LHS, Value *RHS, const Twine &Name = "") {
     if (Constant *LC = dyn_cast<Constant>(LHS))
       if (Constant *RC = dyn_cast<Constant>(RHS))
@@ -336,6 +342,12 @@ public:
         return Folder.CreateNSWSub(LC, RC);
     return Insert(BinaryOperator::CreateNSWSub(LHS, RHS), Name);
   }
+  Value *CreateNUWSub(Value *LHS, Value *RHS, const Twine &Name = "") {
+    if (Constant *LC = dyn_cast<Constant>(LHS))
+      if (Constant *RC = dyn_cast<Constant>(RHS))
+        return Folder.CreateNUWSub(LC, RC);
+    return Insert(BinaryOperator::CreateNUWSub(LHS, RHS), Name);
+  }
   Value *CreateFSub(Value *LHS, Value *RHS, const Twine &Name = "") {
     if (Constant *LC = dyn_cast<Constant>(LHS))
       if (Constant *RC = dyn_cast<Constant>(RHS))
@@ -354,6 +366,12 @@ public:
         return Folder.CreateNSWMul(LC, RC);
     return Insert(BinaryOperator::CreateNSWMul(LHS, RHS), Name);
   }
+  Value *CreateNUWMul(Value *LHS, Value *RHS, const Twine &Name = "") {
+    if (Constant *LC = dyn_cast<Constant>(LHS))
+      if (Constant *RC = dyn_cast<Constant>(RHS))
+        return Folder.CreateNUWMul(LC, RC);
+    return Insert(BinaryOperator::CreateNUWMul(LHS, RHS), Name);
+  }
   Value *CreateFMul(Value *LHS, Value *RHS, const Twine &Name = "") {
     if (Constant *LC = dyn_cast<Constant>(LHS))
       if (Constant *RC = dyn_cast<Constant>(RHS))
@@ -484,6 +502,11 @@ public:
       return Folder.CreateNSWNeg(VC);
     return Insert(BinaryOperator::CreateNSWNeg(V), Name);
   }
+  Value *CreateNUWNeg(Value *V, const Twine &Name = "") {
+    if (Constant *VC = dyn_cast<Constant>(V))
+      return Folder.CreateNUWNeg(VC);
+    return Insert(BinaryOperator::CreateNUWNeg(V), Name);
+  }
   Value *CreateFNeg(Value *V, const Twine &Name = "") {
     if (Constant *VC = dyn_cast<Constant>(V))
       return Folder.CreateFNeg(VC);
diff --git a/libclamav/c++/llvm/include/llvm/Support/IRReader.h b/libclamav/c++/llvm/include/llvm/Support/IRReader.h
index e7780b0..66314e0 100644
--- a/libclamav/c++/llvm/include/llvm/Support/IRReader.h
+++ b/libclamav/c++/llvm/include/llvm/Support/IRReader.h
@@ -23,44 +23,39 @@
 #include "llvm/Bitcode/ReaderWriter.h"
 #include "llvm/Support/MemoryBuffer.h"
 #include "llvm/Support/SourceMgr.h"
-#include "llvm/ModuleProvider.h"
 
 namespace llvm {
 
-  /// If the given MemoryBuffer holds a bitcode image, return a ModuleProvider
-  /// for it which does lazy deserialization of function bodies.  Otherwise,
-  /// attempt to parse it as LLVM Assembly and return a fully populated
-  /// ModuleProvider. This function *always* takes ownership of the given
-  /// MemoryBuffer.
-  inline ModuleProvider *getIRModuleProvider(MemoryBuffer *Buffer,
-                                             SMDiagnostic &Err,
-                                             LLVMContext &Context) {
+  /// If the given MemoryBuffer holds a bitcode image, return a Module for it
+  /// which does lazy deserialization of function bodies.  Otherwise, attempt to
+  /// parse it as LLVM Assembly and return a fully populated Module. This
+  /// function *always* takes ownership of the given MemoryBuffer.
+  inline Module *getLazyIRModule(MemoryBuffer *Buffer,
+                                 SMDiagnostic &Err,
+                                 LLVMContext &Context) {
     if (isBitcode((const unsigned char *)Buffer->getBufferStart(),
                   (const unsigned char *)Buffer->getBufferEnd())) {
       std::string ErrMsg;
-      ModuleProvider *MP = getBitcodeModuleProvider(Buffer, Context, &ErrMsg);
-      if (MP == 0) {
+      Module *M = getLazyBitcodeModule(Buffer, Context, &ErrMsg);
+      if (M == 0) {
         Err = SMDiagnostic(Buffer->getBufferIdentifier(), -1, -1, ErrMsg, "");
         // ParseBitcodeFile does not take ownership of the Buffer in the
         // case of an error.
         delete Buffer;
       }
-      return MP;
+      return M;
     }
 
-    Module *M = ParseAssembly(Buffer, 0, Err, Context);
-    if (M == 0)
-      return 0;
-    return new ExistingModuleProvider(M);
+    return ParseAssembly(Buffer, 0, Err, Context);
   }
 
-  /// If the given file holds a bitcode image, return a ModuleProvider
+  /// If the given file holds a bitcode image, return a Module
   /// for it which does lazy deserialization of function bodies.  Otherwise,
   /// attempt to parse it as LLVM Assembly and return a fully populated
-  /// ModuleProvider.
-  inline ModuleProvider *getIRFileModuleProvider(const std::string &Filename,
-                                                 SMDiagnostic &Err,
-                                                 LLVMContext &Context) {
+  /// Module.
+  inline Module *getLazyIRFileModule(const std::string &Filename,
+                                     SMDiagnostic &Err,
+                                     LLVMContext &Context) {
     std::string ErrMsg;
     MemoryBuffer *F = MemoryBuffer::getFileOrSTDIN(Filename.c_str(), &ErrMsg);
     if (F == 0) {
@@ -69,7 +64,7 @@ namespace llvm {
       return 0;
     }
 
-    return getIRModuleProvider(F, Err, Context);
+    return getLazyIRModule(F, Err, Context);
   }
 
   /// If the given MemoryBuffer holds a bitcode image, return a Module
diff --git a/libclamav/c++/llvm/include/llvm/Support/MachO.h b/libclamav/c++/llvm/include/llvm/Support/MachO.h
new file mode 100644
index 0000000..e6fccfc
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/Support/MachO.h
@@ -0,0 +1,56 @@
+//===-- llvm/Support/MachO.h - The MachO file format ------------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file defines manifest constants for the MachO object file format.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_SUPPORT_MACHO_H
+#define LLVM_SUPPORT_MACHO_H
+
+// NOTE: The enums in this file are intentially named to be different than those
+// in the headers in /usr/include/mach (on darwin systems) to avoid conflicts
+// with those macros.
+namespace llvm {
+  namespace MachO {
+    // Enums from <mach/machine.h>
+    enum {
+      // Capability bits used in the definition of cpu_type.
+      CPUArchMask = 0xff000000,   // Mask for architecture bits
+      CPUArchABI64 = 0x01000000,  // 64 bit ABI
+      
+      // Constants for the cputype field.
+      CPUTypeI386      = 7,
+      CPUTypeX86_64    = CPUTypeI386 | CPUArchABI64,
+      CPUTypeARM       = 12,
+      CPUTypeSPARC     = 14,
+      CPUTypePowerPC   = 18,
+      CPUTypePowerPC64 = CPUTypePowerPC | CPUArchABI64,
+
+
+      // Constants for the cpusubtype field.
+      
+      // X86
+      CPUSubType_I386_ALL    = 3,
+      CPUSubType_X86_64_ALL  = 3,
+      
+      // ARM
+      CPUSubType_ARM_ALL     = 0,
+      CPUSubType_ARM_V4T     = 5,
+      CPUSubType_ARM_V6      = 6,
+
+      // PowerPC
+      CPUSubType_POWERPC_ALL = 0,
+      
+      CPUSubType_SPARC_ALL   = 0
+    };
+  } // end namespace MachO
+} // end namespace llvm
+
+#endif
diff --git a/libclamav/c++/llvm/include/llvm/Support/NoFolder.h b/libclamav/c++/llvm/include/llvm/Support/NoFolder.h
index 78a9035..01256e1 100644
--- a/libclamav/c++/llvm/include/llvm/Support/NoFolder.h
+++ b/libclamav/c++/llvm/include/llvm/Support/NoFolder.h
@@ -45,6 +45,9 @@ public:
   Value *CreateNSWAdd(Constant *LHS, Constant *RHS) const {
     return BinaryOperator::CreateNSWAdd(LHS, RHS);
   }
+  Value *CreateNUWAdd(Constant *LHS, Constant *RHS) const {
+    return BinaryOperator::CreateNUWAdd(LHS, RHS);
+  }
   Value *CreateFAdd(Constant *LHS, Constant *RHS) const {
     return BinaryOperator::CreateFAdd(LHS, RHS);
   }
@@ -54,6 +57,9 @@ public:
   Value *CreateNSWSub(Constant *LHS, Constant *RHS) const {
     return BinaryOperator::CreateNSWSub(LHS, RHS);
   }
+  Value *CreateNUWSub(Constant *LHS, Constant *RHS) const {
+    return BinaryOperator::CreateNUWSub(LHS, RHS);
+  }
   Value *CreateFSub(Constant *LHS, Constant *RHS) const {
     return BinaryOperator::CreateFSub(LHS, RHS);
   }
@@ -63,6 +69,9 @@ public:
   Value *CreateNSWMul(Constant *LHS, Constant *RHS) const {
     return BinaryOperator::CreateNSWMul(LHS, RHS);
   }
+  Value *CreateNUWMul(Constant *LHS, Constant *RHS) const {
+    return BinaryOperator::CreateNUWMul(LHS, RHS);
+  }
   Value *CreateFMul(Constant *LHS, Constant *RHS) const {
     return BinaryOperator::CreateFMul(LHS, RHS);
   }
@@ -121,6 +130,9 @@ public:
   Value *CreateNSWNeg(Constant *C) const {
     return BinaryOperator::CreateNSWNeg(C);
   }
+  Value *CreateNUWNeg(Constant *C) const {
+    return BinaryOperator::CreateNUWNeg(C);
+  }
   Value *CreateNot(Constant *C) const {
     return BinaryOperator::CreateNot(C);
   }
diff --git a/libclamav/c++/llvm/include/llvm/Support/SourceMgr.h b/libclamav/c++/llvm/include/llvm/Support/SourceMgr.h
index 5433a00..fd56b16 100644
--- a/libclamav/c++/llvm/include/llvm/Support/SourceMgr.h
+++ b/libclamav/c++/llvm/include/llvm/Support/SourceMgr.h
@@ -34,33 +34,33 @@ class SourceMgr {
   struct SrcBuffer {
     /// Buffer - The memory buffer for the file.
     MemoryBuffer *Buffer;
-    
+
     /// IncludeLoc - This is the location of the parent include, or null if at
     /// the top level.
     SMLoc IncludeLoc;
   };
-  
+
   /// Buffers - This is all of the buffers that we are reading from.
   std::vector<SrcBuffer> Buffers;
-  
+
   // IncludeDirectories - This is the list of directories we should search for
   // include files in.
   std::vector<std::string> IncludeDirectories;
-  
+
   /// LineNoCache - This is a cache for line number queries, its implementation
   /// is really private to SourceMgr.cpp.
   mutable void *LineNoCache;
-  
+
   SourceMgr(const SourceMgr&);    // DO NOT IMPLEMENT
   void operator=(const SourceMgr&); // DO NOT IMPLEMENT
 public:
   SourceMgr() : LineNoCache(0) {}
   ~SourceMgr();
-  
+
   void setIncludeDirs(const std::vector<std::string> &Dirs) {
     IncludeDirectories = Dirs;
   }
-  
+
   const SrcBuffer &getBufferInfo(unsigned i) const {
     assert(i < Buffers.size() && "Invalid Buffer ID!");
     return Buffers[i];
@@ -70,12 +70,12 @@ public:
     assert(i < Buffers.size() && "Invalid Buffer ID!");
     return Buffers[i].Buffer;
   }
-  
+
   SMLoc getParentIncludeLoc(unsigned i) const {
     assert(i < Buffers.size() && "Invalid Buffer ID!");
     return Buffers[i].IncludeLoc;
   }
-  
+
   unsigned AddNewSourceBuffer(MemoryBuffer *F, SMLoc IncludeLoc) {
     SrcBuffer NB;
     NB.Buffer = F;
@@ -83,20 +83,20 @@ public:
     Buffers.push_back(NB);
     return Buffers.size()-1;
   }
-  
+
   /// AddIncludeFile - Search for a file with the specified name in the current
   /// directory or in one of the IncludeDirs.  If no file is found, this returns
   /// ~0, otherwise it returns the buffer ID of the stacked file.
   unsigned AddIncludeFile(const std::string &Filename, SMLoc IncludeLoc);
-  
+
   /// FindBufferContainingLoc - Return the ID of the buffer containing the
   /// specified location, returning -1 if not found.
   int FindBufferContainingLoc(SMLoc Loc) const;
-  
+
   /// FindLineNumber - Find the line number for the specified location in the
   /// specified file.  This is not a fast method.
   unsigned FindLineNumber(SMLoc Loc, int BufferID = -1) const;
-  
+
   /// PrintMessage - Emit a message about the specified location with the
   /// specified string.
   ///
@@ -105,8 +105,8 @@ public:
   /// @param ShowLine - Should the diagnostic show the source line.
   void PrintMessage(SMLoc Loc, const std::string &Msg, const char *Type,
                     bool ShowLine = true) const;
-  
-  
+
+
   /// GetMessage - Return an SMDiagnostic at the specified location with the
   /// specified string.
   ///
@@ -116,13 +116,13 @@ public:
   SMDiagnostic GetMessage(SMLoc Loc,
                           const std::string &Msg, const char *Type,
                           bool ShowLine = true) const;
-  
-  
+
+
 private:
   void PrintIncludeStack(SMLoc IncludeLoc, raw_ostream &OS) const;
 };
 
-  
+
 /// SMDiagnostic - Instances of this class encapsulate one diagnostic report,
 /// allowing printing to a raw_ostream as a caret diagnostic.
 class SMDiagnostic {
@@ -132,16 +132,16 @@ class SMDiagnostic {
   unsigned ShowLine : 1;
 
 public:
-  SMDiagnostic() : LineNo(0), ColumnNo(0) {}
+  SMDiagnostic() : LineNo(0), ColumnNo(0), ShowLine(0) {}
   SMDiagnostic(const std::string &FN, int Line, int Col,
                const std::string &Msg, const std::string &LineStr,
                bool showline = true)
     : Filename(FN), LineNo(Line), ColumnNo(Col), Message(Msg),
       LineContents(LineStr), ShowLine(showline) {}
 
-  void Print(const char *ProgName, raw_ostream &S);
+  void Print(const char *ProgName, raw_ostream &S) const;
 };
-  
+
 }  // end llvm namespace
 
 #endif
diff --git a/libclamav/c++/llvm/include/llvm/Support/TargetFolder.h b/libclamav/c++/llvm/include/llvm/Support/TargetFolder.h
index 59dd29b..384c493 100644
--- a/libclamav/c++/llvm/include/llvm/Support/TargetFolder.h
+++ b/libclamav/c++/llvm/include/llvm/Support/TargetFolder.h
@@ -52,6 +52,9 @@ public:
   Constant *CreateNSWAdd(Constant *LHS, Constant *RHS) const {
     return Fold(ConstantExpr::getNSWAdd(LHS, RHS));
   }
+  Constant *CreateNUWAdd(Constant *LHS, Constant *RHS) const {
+    return Fold(ConstantExpr::getNUWAdd(LHS, RHS));
+  }
   Constant *CreateFAdd(Constant *LHS, Constant *RHS) const {
     return Fold(ConstantExpr::getFAdd(LHS, RHS));
   }
@@ -61,6 +64,9 @@ public:
   Constant *CreateNSWSub(Constant *LHS, Constant *RHS) const {
     return Fold(ConstantExpr::getNSWSub(LHS, RHS));
   }
+  Constant *CreateNUWSub(Constant *LHS, Constant *RHS) const {
+    return Fold(ConstantExpr::getNUWSub(LHS, RHS));
+  }
   Constant *CreateFSub(Constant *LHS, Constant *RHS) const {
     return Fold(ConstantExpr::getFSub(LHS, RHS));
   }
@@ -70,6 +76,9 @@ public:
   Constant *CreateNSWMul(Constant *LHS, Constant *RHS) const {
     return Fold(ConstantExpr::getNSWMul(LHS, RHS));
   }
+  Constant *CreateNUWMul(Constant *LHS, Constant *RHS) const {
+    return Fold(ConstantExpr::getNUWMul(LHS, RHS));
+  }
   Constant *CreateFMul(Constant *LHS, Constant *RHS) const {
     return Fold(ConstantExpr::getFMul(LHS, RHS));
   }
@@ -128,6 +137,9 @@ public:
   Constant *CreateNSWNeg(Constant *C) const {
     return Fold(ConstantExpr::getNSWNeg(C));
   }
+  Constant *CreateNUWNeg(Constant *C) const {
+    return Fold(ConstantExpr::getNUWNeg(C));
+  }
   Constant *CreateFNeg(Constant *C) const {
     return Fold(ConstantExpr::getFNeg(C));
   }
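[Editor's note] The TargetFolder hunks above add NUW ("no unsigned wrap") folding hooks alongside the existing NSW ones. As an illustrative sketch only (not LLVM API), the NUW property means the mathematical result must fit in the unsigned type; a wrap-free check for 32-bit addition looks like:

```cpp
#include <cassert>
#include <cstdint>
#include <limits>

// Illustrative model of the "no unsigned wrap" property preserved by the
// new CreateNUWAdd/CreateNUWSub/CreateNUWMul hooks: addition is NUW-safe
// iff A + B does not exceed the unsigned maximum.
bool addsWithoutUnsignedWrap(uint32_t A, uint32_t B) {
  return A <= std::numeric_limits<uint32_t>::max() - B;
}
```

A constant folder can only keep (or exploit) the `nuw` flag when this condition provably holds for the operands.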
diff --git a/libclamav/c++/llvm/include/llvm/Support/TypeBuilder.h b/libclamav/c++/llvm/include/llvm/Support/TypeBuilder.h
index fb22e3f..270ac52 100644
--- a/libclamav/c++/llvm/include/llvm/Support/TypeBuilder.h
+++ b/libclamav/c++/llvm/include/llvm/Support/TypeBuilder.h
@@ -231,6 +231,12 @@ public:
 /// we special case it.
 template<> class TypeBuilder<void*, false>
   : public TypeBuilder<types::i<8>*, false> {};
+template<> class TypeBuilder<const void*, false>
+  : public TypeBuilder<types::i<8>*, false> {};
+template<> class TypeBuilder<volatile void*, false>
+  : public TypeBuilder<types::i<8>*, false> {};
+template<> class TypeBuilder<const volatile void*, false>
+  : public TypeBuilder<types::i<8>*, false> {};
 
 template<typename R, bool cross> class TypeBuilder<R(), cross> {
 public:
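[Editor's note] The TypeBuilder hunk above maps `const void*`, `volatile void*`, and `const volatile void*` to the same `i<8>*` builder as plain `void*`. The shape of that fix, sketched with stand-in names rather than the real TypeBuilder machinery, is a set of explicit specializations that all inherit one mapping:

```cpp
#include <cassert>
#include <string>

// Primary template: unknown types get a fallback description.
template <typename T> struct TypeName {
  static std::string get() { return "unknown"; }
};
// All cv-qualified flavors of void* inherit the single void* mapping,
// mirroring how the patch folds them onto TypeBuilder<types::i<8>*>.
template <> struct TypeName<void*> {
  static std::string get() { return "i8*"; }
};
template <> struct TypeName<const void*> : TypeName<void*> {};
template <> struct TypeName<volatile void*> : TypeName<void*> {};
template <> struct TypeName<const volatile void*> : TypeName<void*> {};
```

The design choice is to special-case each cv combination explicitly, since cv-qualifiers on the pointee make `const void*` a distinct template argument from `void*`.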
diff --git a/libclamav/c++/llvm/include/llvm/System/DynamicLibrary.h b/libclamav/c++/llvm/include/llvm/System/DynamicLibrary.h
index ac58407..745b8f8 100644
--- a/libclamav/c++/llvm/include/llvm/System/DynamicLibrary.h
+++ b/libclamav/c++/llvm/include/llvm/System/DynamicLibrary.h
@@ -23,7 +23,7 @@ namespace sys {
   /// might be known as shared libraries, shared objects, dynamic shared
   /// objects, or dynamic link libraries. Regardless of the terminology or the
   /// operating system interface, this class provides a portable interface that
-  /// allows dynamic libraries to be loaded and and searched for externally
+  /// allows dynamic libraries to be loaded and searched for externally
   /// defined symbols. This is typically used to provide "plug-in" support.
   /// It also allows for symbols to be defined which don't live in any library,
   /// but rather the main program itself, useful on Windows where the main
diff --git a/libclamav/c++/llvm/include/llvm/System/Path.h b/libclamav/c++/llvm/include/llvm/System/Path.h
index bdfb9aa..1be27b2 100644
--- a/libclamav/c++/llvm/include/llvm/System/Path.h
+++ b/libclamav/c++/llvm/include/llvm/System/Path.h
@@ -28,7 +28,7 @@ namespace sys {
   /// platform independent and eliminates many of the unix-specific fields.
   /// However, to support llvm-ar, the mode, user, and group fields are
   /// retained. These pertain to unix security and may not have a meaningful
-  /// value on non-Unix platforms. However, the other fields fields should
+  /// value on non-Unix platforms. However, the other fields should
   /// always be applicable on all platforms.  The structure is filled in by
   /// the PathWithStatus class.
   /// @brief File status structure
diff --git a/libclamav/c++/llvm/include/llvm/System/Program.h b/libclamav/c++/llvm/include/llvm/System/Program.h
index 6799562..69ce478 100644
--- a/libclamav/c++/llvm/include/llvm/System/Program.h
+++ b/libclamav/c++/llvm/include/llvm/System/Program.h
@@ -120,10 +120,12 @@ namespace sys {
     /// @brief Construct a Program by finding it by name.
     static Path FindProgramByName(const std::string& name);
 
-    // These methods change the specified standard stream (stdin or stdout) to
-    // binary mode. They return true if an error occurred
+    // These methods change the specified standard stream (stdin,
+    // stdout, or stderr) to binary mode. They return true if an error
+    // occurred
     static bool ChangeStdinToBinary();
     static bool ChangeStdoutToBinary();
+    static bool ChangeStderrToBinary();
 
     /// A convenience function equivalent to Program prg; prg.Execute(..);
     /// prg.Wait(..);
diff --git a/libclamav/c++/llvm/include/llvm/Target/Mangler.h b/libclamav/c++/llvm/include/llvm/Target/Mangler.h
index 04de4e9..45cbf9d 100644
--- a/libclamav/c++/llvm/include/llvm/Target/Mangler.h
+++ b/libclamav/c++/llvm/include/llvm/Target/Mangler.h
@@ -1,4 +1,4 @@
-//===-- llvm/Support/Mangler.h - Self-contained name mangler ----*- C++ -*-===//
+//===-- llvm/Target/Mangler.h - Self-contained name mangler ----*- C++ -*-===//
 //
 //                     The LLVM Compiler Infrastructure
 //
diff --git a/libclamav/c++/llvm/include/llvm/Target/Target.td b/libclamav/c++/llvm/include/llvm/Target/Target.td
index 354e743..9a117df 100644
--- a/libclamav/c++/llvm/include/llvm/Target/Target.td
+++ b/libclamav/c++/llvm/include/llvm/Target/Target.td
@@ -376,7 +376,7 @@ class OptionalDefOperand<ValueType ty, dag OpTypes, dag defaultops>
 
 
 // InstrInfo - This class should only be instantiated once to provide parameters
-// which are global to the the target machine.
+// which are global to the target machine.
 //
 class InstrInfo {
   // If the target wants to associate some target-specific information with each
@@ -399,19 +399,19 @@ def PHI : Instruction {
   let OutOperandList = (ops);
   let InOperandList = (ops variable_ops);
   let AsmString = "PHINODE";
-  let Namespace = "TargetInstrInfo";
+  let Namespace = "TargetOpcode";
 }
 def INLINEASM : Instruction {
   let OutOperandList = (ops);
   let InOperandList = (ops variable_ops);
   let AsmString = "";
-  let Namespace = "TargetInstrInfo";
+  let Namespace = "TargetOpcode";
 }
 def DBG_LABEL : Instruction {
   let OutOperandList = (ops);
   let InOperandList = (ops i32imm:$id);
   let AsmString = "";
-  let Namespace = "TargetInstrInfo";
+  let Namespace = "TargetOpcode";
   let hasCtrlDep = 1;
   let isNotDuplicable = 1;
 }
@@ -419,7 +419,7 @@ def EH_LABEL : Instruction {
   let OutOperandList = (ops);
   let InOperandList = (ops i32imm:$id);
   let AsmString = "";
-  let Namespace = "TargetInstrInfo";
+  let Namespace = "TargetOpcode";
   let hasCtrlDep = 1;
   let isNotDuplicable = 1;
 }
@@ -427,7 +427,7 @@ def GC_LABEL : Instruction {
   let OutOperandList = (ops);
   let InOperandList = (ops i32imm:$id);
   let AsmString = "";
-  let Namespace = "TargetInstrInfo";
+  let Namespace = "TargetOpcode";
   let hasCtrlDep = 1;
   let isNotDuplicable = 1;
 }
@@ -435,21 +435,21 @@ def KILL : Instruction {
   let OutOperandList = (ops);
   let InOperandList = (ops variable_ops);
   let AsmString = "";
-  let Namespace = "TargetInstrInfo";
+  let Namespace = "TargetOpcode";
   let neverHasSideEffects = 1;
 }
 def EXTRACT_SUBREG : Instruction {
   let OutOperandList = (ops unknown:$dst);
   let InOperandList = (ops unknown:$supersrc, i32imm:$subidx);
   let AsmString = "";
-  let Namespace = "TargetInstrInfo";
+  let Namespace = "TargetOpcode";
   let neverHasSideEffects = 1;
 }
 def INSERT_SUBREG : Instruction {
   let OutOperandList = (ops unknown:$dst);
   let InOperandList = (ops unknown:$supersrc, unknown:$subsrc, i32imm:$subidx);
   let AsmString = "";
-  let Namespace = "TargetInstrInfo";
+  let Namespace = "TargetOpcode";
   let neverHasSideEffects = 1;
   let Constraints = "$supersrc = $dst";
 }
@@ -457,7 +457,7 @@ def IMPLICIT_DEF : Instruction {
   let OutOperandList = (ops unknown:$dst);
   let InOperandList = (ops);
   let AsmString = "";
-  let Namespace = "TargetInstrInfo";
+  let Namespace = "TargetOpcode";
   let neverHasSideEffects = 1;
   let isReMaterializable = 1;
   let isAsCheapAsAMove = 1;
@@ -466,22 +466,22 @@ def SUBREG_TO_REG : Instruction {
   let OutOperandList = (ops unknown:$dst);
   let InOperandList = (ops unknown:$implsrc, unknown:$subsrc, i32imm:$subidx);
   let AsmString = "";
-  let Namespace = "TargetInstrInfo";
+  let Namespace = "TargetOpcode";
   let neverHasSideEffects = 1;
 }
 def COPY_TO_REGCLASS : Instruction {
   let OutOperandList = (ops unknown:$dst);
   let InOperandList = (ops unknown:$src, i32imm:$regclass);
   let AsmString = "";
-  let Namespace = "TargetInstrInfo";
+  let Namespace = "TargetOpcode";
   let neverHasSideEffects = 1;
   let isAsCheapAsAMove = 1;
 }
-def DEBUG_VALUE : Instruction {
+def DBG_VALUE : Instruction {
   let OutOperandList = (ops);
   let InOperandList = (ops variable_ops);
-  let AsmString = "DEBUG_VALUE";
-  let Namespace = "TargetInstrInfo";
+  let AsmString = "DBG_VALUE";
+  let Namespace = "TargetOpcode";
   let isAsCheapAsAMove = 1;
 }
 }
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetAsmLexer.h b/libclamav/c++/llvm/include/llvm/Target/TargetAsmLexer.h
index daba1ba..9fcf449 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetAsmLexer.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetAsmLexer.h
@@ -38,12 +38,22 @@ protected: // Can only create subclasses.
   
   /// TheTarget - The Target that this machine was created for.
   const Target &TheTarget;
+  MCAsmLexer *Lexer;
   
 public:
   virtual ~TargetAsmLexer();
   
   const Target &getTarget() const { return TheTarget; }
   
+  /// InstallLexer - Set the lexer to get tokens from lower-level lexer \arg L.
+  void InstallLexer(MCAsmLexer &L) {
+    Lexer = &L;
+  }
+  
+  MCAsmLexer *getLexer() {
+    return Lexer;
+  }
+  
   /// Lex - Consume the next token from the input stream and return it.
   const AsmToken &Lex() {
     return CurTok = LexToken();
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetInstrInfo.h b/libclamav/c++/llvm/include/llvm/Target/TargetInstrInfo.h
index 7144fe0..d95e4e8 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetInstrInfo.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetInstrInfo.h
@@ -45,55 +45,6 @@ public:
   TargetInstrInfo(const TargetInstrDesc *desc, unsigned NumOpcodes);
   virtual ~TargetInstrInfo();
 
-  // Invariant opcodes: All instruction sets have these as their low opcodes.
-  enum { 
-    PHI = 0,
-    INLINEASM = 1,
-    DBG_LABEL = 2,
-    EH_LABEL = 3,
-    GC_LABEL = 4,
-
-    /// KILL - This instruction is a noop that is used only to adjust the liveness
-    /// of registers. This can be useful when dealing with sub-registers.
-    KILL = 5,
-
-    /// EXTRACT_SUBREG - This instruction takes two operands: a register
-    /// that has subregisters, and a subregister index. It returns the
-    /// extracted subregister value. This is commonly used to implement
-    /// truncation operations on target architectures which support it.
-    EXTRACT_SUBREG = 6,
-
-    /// INSERT_SUBREG - This instruction takes three operands: a register
-    /// that has subregisters, a register providing an insert value, and a
-    /// subregister index. It returns the value of the first register with
-    /// the value of the second register inserted. The first register is
-    /// often defined by an IMPLICIT_DEF, as is commonly used to implement
-    /// anyext operations on target architectures which support it.
-    INSERT_SUBREG = 7,
-
-    /// IMPLICIT_DEF - This is the MachineInstr-level equivalent of undef.
-    IMPLICIT_DEF = 8,
-
-    /// SUBREG_TO_REG - This instruction is similar to INSERT_SUBREG except
-    /// that the first operand is an immediate integer constant. This constant
-    /// is often zero, as is commonly used to implement zext operations on
-    /// target architectures which support it, such as with x86-64 (with
-    /// zext from i32 to i64 via implicit zero-extension).
-    SUBREG_TO_REG = 9,
-
-    /// COPY_TO_REGCLASS - This instruction is a placeholder for a plain
-    /// register-to-register copy into a specific register class. This is only
-    /// used between instruction selection and MachineInstr creation, before
-    /// virtual registers have been created for all the instructions, and it's
-    /// only needed in cases where the register classes implied by the
-    /// instructions are insufficient. The actual MachineInstrs to perform
-    /// the copy are emitted with the TargetInstrInfo::copyRegToReg hook.
-    COPY_TO_REGCLASS = 10,
-
-    // DEBUG_VALUE - a mapping of the llvm.dbg.value intrinsic
-    DEBUG_VALUE = 11
-  };
-
   unsigned getNumOpcodes() const { return NumOpcodes; }
 
   /// get - Return the machine instruction descriptor that corresponds to the
@@ -109,7 +60,7 @@ public:
   /// that aren't always available.
   bool isTriviallyReMaterializable(const MachineInstr *MI,
                                    AliasAnalysis *AA = 0) const {
-    return MI->getOpcode() == IMPLICIT_DEF ||
+    return MI->getOpcode() == TargetOpcode::IMPLICIT_DEF ||
            (MI->getDesc().isRematerializable() &&
             (isReallyTriviallyReMaterializable(MI, AA) ||
              isReallyTriviallyReMaterializableGeneric(MI, AA)));
@@ -167,12 +118,12 @@ public:
         SrcReg == DstReg)
       return true;
 
-    if (MI.getOpcode() == TargetInstrInfo::EXTRACT_SUBREG &&
+    if (MI.getOpcode() == TargetOpcode::EXTRACT_SUBREG &&
         MI.getOperand(0).getReg() == MI.getOperand(1).getReg())
     return true;
 
-    if ((MI.getOpcode() == TargetInstrInfo::INSERT_SUBREG ||
-         MI.getOpcode() == TargetInstrInfo::SUBREG_TO_REG) &&
+    if ((MI.getOpcode() == TargetOpcode::INSERT_SUBREG ||
+         MI.getOpcode() == TargetOpcode::SUBREG_TO_REG) &&
         MI.getOperand(0).getReg() == MI.getOperand(2).getReg())
       return true;
     return false;
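[Editor's note] The TargetInstrInfo hunks above delete the invariant-opcode enum from the class and switch all uses to the `TargetOpcode` namespace (defined in the new TargetOpcodes.h later in this patch, which also renames DEBUG_VALUE to DBG_VALUE). A simplified sketch of the refactor's shape, with a deliberately reduced opcode check (the real `isIdentityCopy` also compares register operands):

```cpp
#include <cassert>

// Namespace-level enum, as in the new TargetOpcodes.h: any component can
// name these opcodes without depending on the TargetInstrInfo class.
// Values mirror the enum removed above (PHI = 0 ... DBG_VALUE = 11).
namespace TargetOpcodeSketch {
enum {
  PHI = 0, INLINEASM = 1, DBG_LABEL = 2, EH_LABEL = 3, GC_LABEL = 4,
  KILL = 5, EXTRACT_SUBREG = 6, INSERT_SUBREG = 7, IMPLICIT_DEF = 8,
  SUBREG_TO_REG = 9, COPY_TO_REGCLASS = 10, DBG_VALUE = 11
};
}

// Simplified: true for the subreg pseudo-ops that isIdentityCopy inspects.
bool isSubregPseudo(unsigned Opc) {
  return Opc == TargetOpcodeSketch::EXTRACT_SUBREG ||
         Opc == TargetOpcodeSketch::INSERT_SUBREG ||
         Opc == TargetOpcodeSketch::SUBREG_TO_REG;
}
```

Moving the enum out of the class means TableGen'd files and headers that previously wrote `TargetInstrInfo::PHI` can say `TargetOpcode::PHI` without the heavyweight include.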
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetLowering.h b/libclamav/c++/llvm/include/llvm/Target/TargetLowering.h
index aab70cb..c6ac89a 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetLowering.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetLowering.h
@@ -346,6 +346,11 @@ public:
     return true;
   }
 
+  /// canOpTrap - Returns true if the operation can trap for the value type.
+  /// VT must be a legal type. By default, we optimistically assume most
+  /// operations don't trap except for divide and remainder.
+  virtual bool canOpTrap(unsigned Op, EVT VT) const;
+
   /// isVectorClearMaskLegal - Similar to isShuffleMaskLegal. Targets
   /// can use this to indicate if there is a suitable
   /// VECTOR_SHUFFLE that can be used to replace a VAND with a constant
@@ -1167,15 +1172,9 @@ public:
   /// described by the Ins array. The implementation should fill in the
   /// InVals array with legal-type return values from the call, and return
   /// the resulting token chain value.
-  ///
-  /// The isTailCall flag here is normative. If it is true, the
-  /// implementation must emit a tail call. The
-  /// IsEligibleForTailCallOptimization hook should be used to catch
-  /// cases that cannot be handled.
-  ///
   virtual SDValue
     LowerCall(SDValue Chain, SDValue Callee,
-              CallingConv::ID CallConv, bool isVarArg, bool isTailCall,
+              CallingConv::ID CallConv, bool isVarArg, bool &isTailCall,
               const SmallVectorImpl<ISD::OutputArg> &Outs,
               const SmallVectorImpl<ISD::InputArg> &Ins,
               DebugLoc dl, SelectionDAG &DAG,
@@ -1301,19 +1300,6 @@ public:
     assert(0 && "ReplaceNodeResults not implemented for this target!");
   }
 
-  /// IsEligibleForTailCallOptimization - Check whether the call is eligible for
-  /// tail call optimization. Targets which want to do tail call optimization
-  /// should override this function.
-  virtual bool
-  IsEligibleForTailCallOptimization(SDValue Callee,
-                                    CallingConv::ID CalleeCC,
-                                    bool isVarArg,
-                                    const SmallVectorImpl<ISD::InputArg> &Ins,
-                                    SelectionDAG& DAG) const {
-    // Conservative default: no calls are eligible.
-    return false;
-  }
-
   /// getTargetNodeName() - This method returns the name of a target specific
   /// DAG node.
   virtual const char *getTargetNodeName(unsigned Opcode) const;
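[Editor's note] The new `canOpTrap` hook above documents a default policy: assume operations don't trap except divide and remainder. A hypothetical sketch of that default (the `Opcode` values are placeholders, not LLVM's ISD enumerators):

```cpp
#include <cassert>

// Placeholder opcodes standing in for ISD node kinds.
enum Opcode { Add, Mul, SDiv, UDiv, SRem, URem };

// Mirrors the documented default: optimistically assume most operations
// cannot trap, except integer divide and remainder (e.g. divide by zero).
bool canOpTrapDefault(Opcode Op) {
  switch (Op) {
  case SDiv: case UDiv: case SRem: case URem:
    return true;
  default:
    return false;
  }
}
```

Targets whose legal types have different trapping behavior would override the virtual hook rather than rely on this default.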
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetMachOWriterInfo.h b/libclamav/c++/llvm/include/llvm/Target/TargetMachOWriterInfo.h
deleted file mode 100644
index f723bb5..0000000
--- a/libclamav/c++/llvm/include/llvm/Target/TargetMachOWriterInfo.h
+++ /dev/null
@@ -1,112 +0,0 @@
-//===-- llvm/Target/TargetMachOWriterInfo.h - MachO Writer Info--*- C++ -*-===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This file defines the TargetMachOWriterInfo class.
-//
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_TARGET_TARGETMACHOWRITERINFO_H
-#define LLVM_TARGET_TARGETMACHOWRITERINFO_H
-
-#include "llvm/CodeGen/MachineRelocation.h"
-
-namespace llvm {
-
-  class MachineBasicBlock;
-  class OutputBuffer;
-
-  //===--------------------------------------------------------------------===//
-  //                        TargetMachOWriterInfo
-  //===--------------------------------------------------------------------===//
-
-  class TargetMachOWriterInfo {
-    uint32_t CPUType;                 // CPU specifier
-    uint32_t CPUSubType;              // Machine specifier
-  public:
-    // The various CPU_TYPE_* constants are already defined by at least one
-    // system header file and create compilation errors if not respected.
-#if !defined(CPU_TYPE_I386)
-#define CPU_TYPE_I386       7
-#endif
-#if !defined(CPU_TYPE_X86_64)
-#define CPU_TYPE_X86_64     (CPU_TYPE_I386 | 0x1000000)
-#endif
-#if !defined(CPU_TYPE_ARM)
-#define CPU_TYPE_ARM        12
-#endif
-#if !defined(CPU_TYPE_SPARC)
-#define CPU_TYPE_SPARC      14
-#endif
-#if !defined(CPU_TYPE_POWERPC)
-#define CPU_TYPE_POWERPC    18
-#endif
-#if !defined(CPU_TYPE_POWERPC64)
-#define CPU_TYPE_POWERPC64  (CPU_TYPE_POWERPC | 0x1000000)
-#endif
-
-    // Constants for the cputype field
-    // see <mach/machine.h>
-    enum {
-      HDR_CPU_TYPE_I386      = CPU_TYPE_I386,
-      HDR_CPU_TYPE_X86_64    = CPU_TYPE_X86_64,
-      HDR_CPU_TYPE_ARM       = CPU_TYPE_ARM,
-      HDR_CPU_TYPE_SPARC     = CPU_TYPE_SPARC,
-      HDR_CPU_TYPE_POWERPC   = CPU_TYPE_POWERPC,
-      HDR_CPU_TYPE_POWERPC64 = CPU_TYPE_POWERPC64
-    };
-      
-#if !defined(CPU_SUBTYPE_I386_ALL)
-#define CPU_SUBTYPE_I386_ALL    3
-#endif
-#if !defined(CPU_SUBTYPE_X86_64_ALL)
-#define CPU_SUBTYPE_X86_64_ALL  3
-#endif
-#if !defined(CPU_SUBTYPE_ARM_ALL)
-#define CPU_SUBTYPE_ARM_ALL     0
-#endif
-#if !defined(CPU_SUBTYPE_SPARC_ALL)
-#define CPU_SUBTYPE_SPARC_ALL   0
-#endif
-#if !defined(CPU_SUBTYPE_POWERPC_ALL)
-#define CPU_SUBTYPE_POWERPC_ALL 0
-#endif
-
-    // Constants for the cpusubtype field
-    // see <mach/machine.h>
-    enum {
-      HDR_CPU_SUBTYPE_I386_ALL    = CPU_SUBTYPE_I386_ALL,
-      HDR_CPU_SUBTYPE_X86_64_ALL  = CPU_SUBTYPE_X86_64_ALL,
-      HDR_CPU_SUBTYPE_ARM_ALL     = CPU_SUBTYPE_ARM_ALL,
-      HDR_CPU_SUBTYPE_SPARC_ALL   = CPU_SUBTYPE_SPARC_ALL,
-      HDR_CPU_SUBTYPE_POWERPC_ALL = CPU_SUBTYPE_POWERPC_ALL
-    };
-
-    TargetMachOWriterInfo(uint32_t cputype, uint32_t cpusubtype)
-      : CPUType(cputype), CPUSubType(cpusubtype) {}
-    virtual ~TargetMachOWriterInfo();
-
-    virtual MachineRelocation GetJTRelocation(unsigned Offset,
-                                              MachineBasicBlock *MBB) const;
-
-    virtual unsigned GetTargetRelocation(MachineRelocation &MR,
-                                         unsigned FromIdx,
-                                         unsigned ToAddr,
-                                         unsigned ToIdx,
-                                         OutputBuffer &RelocOut,
-                                         OutputBuffer &SecOut,
-                                         bool Scattered,
-                                         bool Extern) const { return 0; }
-
-    uint32_t getCPUType() const { return CPUType; }
-    uint32_t getCPUSubType() const { return CPUSubType; }
-  };
-
-} // end llvm namespace
-
-#endif // LLVM_TARGET_TARGETMACHOWRITERINFO_H
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetMachine.h b/libclamav/c++/llvm/include/llvm/Target/TargetMachine.h
index 4db3d3e..63e28ac 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetMachine.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetMachine.h
@@ -29,14 +29,11 @@ class TargetIntrinsicInfo;
 class TargetJITInfo;
 class TargetLowering;
 class TargetFrameInfo;
-class MachineCodeEmitter;
 class JITCodeEmitter;
-class ObjectCodeEmitter;
 class TargetRegisterInfo;
 class PassManagerBase;
 class PassManager;
 class Pass;
-class TargetMachOWriterInfo;
 class TargetELFWriterInfo;
 class formatted_raw_ostream;
 
@@ -61,16 +58,6 @@ namespace CodeModel {
   };
 }
 
-namespace FileModel {
-  enum Model {
-    Error,
-    None,
-    AsmFile,
-    MachOFile,
-    ElfFile
-  };
-}
-
 // Code generation optimization level.
 namespace CodeGenOpt {
   enum Level {
@@ -163,11 +150,6 @@ public:
     return InstrItineraryData();
   }
 
-  /// getMachOWriterInfo - If this target supports a Mach-O writer, return
-  /// information for it, otherwise return null.
-  /// 
-  virtual const TargetMachOWriterInfo *getMachOWriterInfo() const { return 0; }
-
   /// getELFWriterInfo - If this target supports an ELF writer, return
   /// information for it, otherwise return null.
   /// 
@@ -212,9 +194,12 @@ public:
   }
 
   /// CodeGenFileType - These enums are meant to be passed into
-  /// addPassesToEmitFile to indicate what type of file to emit.
+  /// addPassesToEmitFile to indicate what type of file to emit, and returned by
+  /// it to indicate what type of file could actually be made.
   enum CodeGenFileType {
-    AssemblyFile, ObjectFile, DynamicLibrary
+    CGFT_AssemblyFile,
+    CGFT_ObjectFile,
+    CGFT_Null         // Do not emit any output.
   };
 
   /// getEnableTailMergeDefault - the default setting for -enable-tail-merge
@@ -223,61 +208,17 @@ public:
 
   /// addPassesToEmitFile - Add passes to the specified pass manager to get the
   /// specified file emitted.  Typically this will involve several steps of code
-  /// generation.
-  /// This method should return FileModel::Error if emission of this file type
-  /// is not supported.
-  ///
-  virtual FileModel::Model addPassesToEmitFile(PassManagerBase &,
-                                               formatted_raw_ostream &,
-                                               CodeGenFileType,
-                                               CodeGenOpt::Level) {
-    return FileModel::None;
-  }
-
-  /// addPassesToEmitFileFinish - If the passes to emit the specified file had
-  /// to be split up (e.g., to add an object writer pass), this method can be
-  /// used to finish up adding passes to emit the file, if necessary.
-  ///
-  virtual bool addPassesToEmitFileFinish(PassManagerBase &,
-                                         MachineCodeEmitter *,
-                                         CodeGenOpt::Level) {
-    return true;
-  }
- 
-  /// addPassesToEmitFileFinish - If the passes to emit the specified file had
-  /// to be split up (e.g., to add an object writer pass), this method can be
-  /// used to finish up adding passes to emit the file, if necessary.
-  ///
-  virtual bool addPassesToEmitFileFinish(PassManagerBase &,
-                                         JITCodeEmitter *,
-                                         CodeGenOpt::Level) {
-    return true;
-  }
- 
-  /// addPassesToEmitFileFinish - If the passes to emit the specified file had
-  /// to be split up (e.g., to add an object writer pass), this method can be
-  /// used to finish up adding passes to emit the file, if necessary.
-  ///
-  virtual bool addPassesToEmitFileFinish(PassManagerBase &,
-                                         ObjectCodeEmitter *,
-                                         CodeGenOpt::Level) {
-    return true;
-  }
- 
-  /// addPassesToEmitMachineCode - Add passes to the specified pass manager to
-  /// get machine code emitted.  This uses a MachineCodeEmitter object to handle
-  /// actually outputting the machine code and resolving things like the address
-  /// of functions.  This method returns true if machine code emission is
-  /// not supported.
-  ///
-  virtual bool addPassesToEmitMachineCode(PassManagerBase &,
-                                          MachineCodeEmitter &,
-                                          CodeGenOpt::Level) {
+  /// generation.  This method should return true if emission of this file type
+  /// is not supported, or false on success.
+  virtual bool addPassesToEmitFile(PassManagerBase &,
+                                   formatted_raw_ostream &,
+                                   CodeGenFileType Filetype,
+                                   CodeGenOpt::Level) {
     return true;
   }
 
   /// addPassesToEmitMachineCode - Add passes to the specified pass manager to
-  /// get machine code emitted.  This uses a MachineCodeEmitter object to handle
+  /// get machine code emitted.  This uses a JITCodeEmitter object to handle
   /// actually outputting the machine code and resolving things like the address
   /// of functions.  This method returns true if machine code emission is
   /// not supported.
@@ -312,9 +253,6 @@ protected: // Can only create subclasses.
   bool addCommonCodeGenPasses(PassManagerBase &, CodeGenOpt::Level);
 
 private:
-  // These routines are used by addPassesToEmitFileFinish and
-  // addPassesToEmitMachineCode to set the CodeModel if it's still marked
-  // as default.
   virtual void setCodeModelForJIT();
   virtual void setCodeModelForStatic();
   
@@ -322,56 +260,15 @@ public:
   
   /// addPassesToEmitFile - Add passes to the specified pass manager to get the
   /// specified file emitted.  Typically this will involve several steps of code
-  /// generation.  If OptLevel is None, the code generator should emit code as fast
-  /// as possible, though the generated code may be less efficient.  This method
-  /// should return FileModel::Error if emission of this file type is not
-  /// supported.
-  ///
-  /// The default implementation of this method adds components from the
-  /// LLVM retargetable code generator, invoking the methods below to get
-  /// target-specific passes in standard locations.
-  ///
-  virtual FileModel::Model addPassesToEmitFile(PassManagerBase &PM,
-                                               formatted_raw_ostream &Out,
-                                               CodeGenFileType FileType,
-                                               CodeGenOpt::Level);
-  
-  /// addPassesToEmitFileFinish - If the passes to emit the specified file had
-  /// to be split up (e.g., to add an object writer pass), this method can be
-  /// used to finish up adding passes to emit the file, if necessary.
-  ///
-  virtual bool addPassesToEmitFileFinish(PassManagerBase &PM,
-                                         MachineCodeEmitter *MCE,
-                                         CodeGenOpt::Level);
- 
-  /// addPassesToEmitFileFinish - If the passes to emit the specified file had
-  /// to be split up (e.g., to add an object writer pass), this method can be
-  /// used to finish up adding passes to emit the file, if necessary.
-  ///
-  virtual bool addPassesToEmitFileFinish(PassManagerBase &PM,
-                                         JITCodeEmitter *JCE,
-                                         CodeGenOpt::Level);
- 
-  /// addPassesToEmitFileFinish - If the passes to emit the specified file had
-  /// to be split up (e.g., to add an object writer pass), this method can be
-  /// used to finish up adding passes to emit the file, if necessary.
-  ///
-  virtual bool addPassesToEmitFileFinish(PassManagerBase &PM,
-                                         ObjectCodeEmitter *OCE,
-                                         CodeGenOpt::Level);
- 
-  /// addPassesToEmitMachineCode - Add passes to the specified pass manager to
-  /// get machine code emitted.  This uses a MachineCodeEmitter object to handle
-  /// actually outputting the machine code and resolving things like the address
-  /// of functions.  This method returns true if machine code emission is
-  /// not supported.
-  ///
-  virtual bool addPassesToEmitMachineCode(PassManagerBase &PM,
-                                          MachineCodeEmitter &MCE,
-                                          CodeGenOpt::Level);
+  /// generation.  If OptLevel is None, the code generator should emit code as
+  /// fast as possible, though the generated code may be less efficient.
+  virtual bool addPassesToEmitFile(PassManagerBase &PM,
+                                   formatted_raw_ostream &Out,
+                                   CodeGenFileType FileType,
+                                   CodeGenOpt::Level);
   
   /// addPassesToEmitMachineCode - Add passes to the specified pass manager to
-  /// get machine code emitted.  This uses a MachineCodeEmitter object to handle
+  /// get machine code emitted.  This uses a JITCodeEmitter object to handle
   /// actually outputting the machine code and resolving things like the address
   /// of functions.  This method returns true if machine code emission is
   /// not supported.
@@ -424,61 +321,13 @@ public:
   /// code emitter, if supported.  If this is not supported, 'true' should be
   /// returned.
   virtual bool addCodeEmitter(PassManagerBase &, CodeGenOpt::Level,
-                              MachineCodeEmitter &) {
-    return true;
-  }
-
-  /// addCodeEmitter - This pass should be overridden by the target to add a
-  /// code emitter, if supported.  If this is not supported, 'true' should be
-  /// returned.
-  virtual bool addCodeEmitter(PassManagerBase &, CodeGenOpt::Level,
                               JITCodeEmitter &) {
     return true;
   }
 
-  /// addSimpleCodeEmitter - This pass should be overridden by the target to add
-  /// a code emitter (without setting flags), if supported.  If this is not
-  /// supported, 'true' should be returned.
-  virtual bool addSimpleCodeEmitter(PassManagerBase &, CodeGenOpt::Level,
-                                    MachineCodeEmitter &) {
-    return true;
-  }
-
-  /// addSimpleCodeEmitter - This pass should be overridden by the target to add
-  /// a code emitter (without setting flags), if supported.  If this is not
-  /// supported, 'true' should be returned.
-  virtual bool addSimpleCodeEmitter(PassManagerBase &, CodeGenOpt::Level,
-                                    JITCodeEmitter &) {
-    return true;
-  }
-
-  /// addSimpleCodeEmitter - This pass should be overridden by the target to add
-  /// a code emitter (without setting flags), if supported.  If this is not
-  /// supported, 'true' should be returned.
-  virtual bool addSimpleCodeEmitter(PassManagerBase &, CodeGenOpt::Level,
-                                    ObjectCodeEmitter &) {
-    return true;
-  }
-
   /// getEnableTailMergeDefault - the default setting for -enable-tail-merge
   /// on this target.  User flag overrides.
   virtual bool getEnableTailMergeDefault() const { return true; }
-
-  /// addAssemblyEmitter - Helper function which creates a target specific
-  /// assembly printer, if available.
-  ///
-  /// \return Returns 'false' on success.
-  bool addAssemblyEmitter(PassManagerBase &, CodeGenOpt::Level,
-                          bool /* VerboseAsmDefault */,
-                          formatted_raw_ostream &);
-
-  /// addObjectFileEmitter - Helper function which creates a target specific
-  /// object files emitter, if available.  This interface is temporary, for
-  /// bringing up MCAssembler-based object file emitters.
-  ///
-  /// \return Returns 'false' on success.
-  bool addObjectFileEmitter(PassManagerBase &, CodeGenOpt::Level,
-                            formatted_raw_ostream &);
 };
 
 } // End llvm namespace
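[Editor's note] The TargetMachine hunks above collapse the old two-phase `FileModel::Model` + `addPassesToEmitFileFinish` protocol into a single `addPassesToEmitFile` that returns `true` when the file type is unsupported and `false` on success. A sketch of the simplified caller-side contract, with stand-in types (a hypothetical target that only supports assembly output):

```cpp
#include <cassert>

// Stand-in for the new CodeGenFileType enum in the patch.
enum CodeGenFileType { CGFT_AssemblyFile, CGFT_ObjectFile, CGFT_Null };

// Hypothetical target: one call, one bool result (true == unsupported),
// replacing the old FileModel result plus the separate Finish step.
struct AsmOnlyTarget {
  bool addPassesToEmitFile(CodeGenFileType FT) {
    return FT != CGFT_AssemblyFile && FT != CGFT_Null;
  }
};
```

Callers now need a single error check instead of pattern-matching a five-state `FileModel` and then remembering to call a finish method.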
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetOpcodes.h b/libclamav/c++/llvm/include/llvm/Target/TargetOpcodes.h
new file mode 100644
index 0000000..10cb45f
--- /dev/null
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetOpcodes.h
@@ -0,0 +1,72 @@
+//===-- llvm/Target/TargetOpcodes.h - Target Indep Opcodes ------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file defines the target independent instruction opcodes.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_TARGET_TARGETOPCODES_H
+#define LLVM_TARGET_TARGETOPCODES_H
+
+namespace llvm {
+  
+// Invariant opcodes: All instruction sets have these as their low opcodes.
+namespace TargetOpcode {
+  enum { 
+    PHI = 0,
+    INLINEASM = 1,
+    DBG_LABEL = 2,
+    EH_LABEL = 3,
+    GC_LABEL = 4,
+    
+    /// KILL - This instruction is a noop that is used only to adjust the
+    /// liveness of registers. This can be useful when dealing with
+    /// sub-registers.
+    KILL = 5,
+    
+    /// EXTRACT_SUBREG - This instruction takes two operands: a register
+    /// that has subregisters, and a subregister index. It returns the
+    /// extracted subregister value. This is commonly used to implement
+    /// truncation operations on target architectures which support it.
+    EXTRACT_SUBREG = 6,
+    
+    /// INSERT_SUBREG - This instruction takes three operands: a register
+    /// that has subregisters, a register providing an insert value, and a
+    /// subregister index. It returns the value of the first register with
+    /// the value of the second register inserted. The first register is
+    /// often defined by an IMPLICIT_DEF, as is commonly used to implement
+    /// anyext operations on target architectures which support it.
+    INSERT_SUBREG = 7,
+    
+    /// IMPLICIT_DEF - This is the MachineInstr-level equivalent of undef.
+    IMPLICIT_DEF = 8,
+    
+    /// SUBREG_TO_REG - This instruction is similar to INSERT_SUBREG except
+    /// that the first operand is an immediate integer constant. This constant
+    /// is often zero, as is commonly used to implement zext operations on
+    /// target architectures which support it, such as with x86-64 (with
+    /// zext from i32 to i64 via implicit zero-extension).
+    SUBREG_TO_REG = 9,
+    
+    /// COPY_TO_REGCLASS - This instruction is a placeholder for a plain
+    /// register-to-register copy into a specific register class. This is only
+    /// used between instruction selection and MachineInstr creation, before
+    /// virtual registers have been created for all the instructions, and it's
+    /// only needed in cases where the register classes implied by the
+    /// instructions are insufficient. The actual MachineInstrs to perform
+    /// the copy are emitted with the TargetInstrInfo::copyRegToReg hook.
+    COPY_TO_REGCLASS = 10,
+    
+    /// DBG_VALUE - a mapping of the llvm.dbg.value intrinsic.
+    DBG_VALUE = 11
+  };
+} // end namespace TargetOpcode
+} // end namespace llvm
+
+#endif
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetOptions.h b/libclamav/c++/llvm/include/llvm/Target/TargetOptions.h
index b43450d..b63c2bf 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetOptions.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetOptions.h
@@ -116,10 +116,13 @@ namespace llvm {
   /// be emitted for all functions.
   extern bool UnwindTablesMandatory;
 
-  /// PerformTailCallOpt - This flag is enabled when -tailcallopt is specified
-  /// on the commandline. When the flag is on, the target will perform tail call
-  /// optimization (pop the caller's stack) providing it supports it.
-  extern bool PerformTailCallOpt;
+  /// GuaranteedTailCallOpt - This flag is enabled when -tailcallopt is
+  /// specified on the commandline. When the flag is on, participating targets
+  /// will perform tail call optimization on all calls which use the fastcc
+  /// calling convention and which satisfy certain target-independent
+  /// criteria (being at the end of a function, having the same return type
+  /// as their parent function, etc.), using an alternate ABI if necessary.
+  extern bool GuaranteedTailCallOpt;
 
   /// StackAlignment - Override default stack alignment for target.
   extern unsigned StackAlignment;
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetRegisterInfo.h b/libclamav/c++/llvm/include/llvm/Target/TargetRegisterInfo.h
index f93eadb..65b60f7 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetRegisterInfo.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetRegisterInfo.h
@@ -169,7 +169,7 @@ public:
     return I;
   }
 
-  /// hasSubClass - return true if the the specified TargetRegisterClass
+  /// hasSubClass - return true if the specified TargetRegisterClass
   /// is a proper subset of this TargetRegisterClass.
   bool hasSubClass(const TargetRegisterClass *cs) const {
     for (int i = 0; SubClasses[i] != NULL; ++i)
@@ -696,12 +696,12 @@ public:
 
   /// getFrameIndexOffset - Returns the displacement from the frame register to
   /// the stack frame of the specified index.
-  virtual int getFrameIndexOffset(MachineFunction &MF, int FI) const;
+  virtual int getFrameIndexOffset(const MachineFunction &MF, int FI) const;
 
   /// getFrameIndexReference - This method should return the base register
   /// and offset used to reference a frame index location. The offset is
   /// returned directly, and the base register is returned via FrameReg.
-  virtual int getFrameIndexReference(MachineFunction &MF, int FI,
+  virtual int getFrameIndexReference(const MachineFunction &MF, int FI,
                                      unsigned &FrameReg) const {
     // By default, assume all frame indices are referenced via whatever
     // getFrameRegister() says. The target can override this if it's doing
diff --git a/libclamav/c++/llvm/include/llvm/Target/TargetRegistry.h b/libclamav/c++/llvm/include/llvm/Target/TargetRegistry.h
index d3aa867..37380ab 100644
--- a/libclamav/c++/llvm/include/llvm/Target/TargetRegistry.h
+++ b/libclamav/c++/llvm/include/llvm/Target/TargetRegistry.h
@@ -25,12 +25,14 @@
 
 namespace llvm {
   class AsmPrinter;
-  class MCAsmParser;
-  class MCCodeEmitter;
   class Module;
   class MCAsmInfo;
+  class MCAsmParser;
+  class MCCodeEmitter;
+  class MCContext;
   class MCDisassembler;
   class MCInstPrinter;
+  class MCStreamer;
   class TargetAsmLexer;
   class TargetAsmParser;
   class TargetMachine;
@@ -58,8 +60,9 @@ namespace llvm {
                                                   const std::string &Features);
     typedef AsmPrinter *(*AsmPrinterCtorTy)(formatted_raw_ostream &OS,
                                             TargetMachine &TM,
-                                            const MCAsmInfo *MAI,
-                                            bool VerboseAsm);
+                                            MCContext &Ctx,
+                                            MCStreamer &Streamer,
+                                            const MCAsmInfo *MAI);
     typedef TargetAsmLexer *(*AsmLexerCtorTy)(const Target &T,
                                               const MCAsmInfo &MAI);
     typedef TargetAsmParser *(*AsmParserCtorTy)(const Target &T,MCAsmParser &P);
@@ -69,7 +72,8 @@ namespace llvm {
                                                   const MCAsmInfo &MAI,
                                                   raw_ostream &O);
     typedef MCCodeEmitter *(*CodeEmitterCtorTy)(const Target &T,
-                                                TargetMachine &TM);
+                                                TargetMachine &TM,
+                                                MCContext &Ctx);
 
   private:
     /// Next - The next registered target in the linked list, maintained by the
@@ -189,12 +193,14 @@ namespace llvm {
       return TargetMachineCtorFn(*this, Triple, Features);
     }
 
-    /// createAsmPrinter - Create a target specific assembly printer pass.
+    /// createAsmPrinter - Create a target specific assembly printer pass.  This
+    /// takes ownership of the MCContext and MCStreamer objects but not the MAI.
     AsmPrinter *createAsmPrinter(formatted_raw_ostream &OS, TargetMachine &TM,
-                                 const MCAsmInfo *MAI, bool Verbose) const {
+                                 MCContext &Ctx, MCStreamer &Streamer,
+                                 const MCAsmInfo *MAI) const {
       if (!AsmPrinterCtorFn)
         return 0;
-      return AsmPrinterCtorFn(OS, TM, MAI, Verbose);
+      return AsmPrinterCtorFn(OS, TM, Ctx, Streamer, MAI);
     }
 
     /// createAsmLexer - Create a target specific assembly lexer.
@@ -231,10 +237,10 @@ namespace llvm {
     
     
     /// createCodeEmitter - Create a target specific code emitter.
-    MCCodeEmitter *createCodeEmitter(TargetMachine &TM) const {
+    MCCodeEmitter *createCodeEmitter(TargetMachine &TM, MCContext &Ctx) const {
       if (!CodeEmitterCtorFn)
         return 0;
-      return CodeEmitterCtorFn(*this, TM);
+      return CodeEmitterCtorFn(*this, TM, Ctx);
     }
 
     /// @}
@@ -547,8 +553,9 @@ namespace llvm {
 
   private:
     static AsmPrinter *Allocator(formatted_raw_ostream &OS, TargetMachine &TM,
-                                 const MCAsmInfo *MAI, bool Verbose) {
-      return new AsmPrinterImpl(OS, TM, MAI, Verbose);
+                                 MCContext &Ctx, MCStreamer &Streamer,
+                                 const MCAsmInfo *MAI) {
+      return new AsmPrinterImpl(OS, TM, Ctx, Streamer, MAI);
     }
   };
 
@@ -607,8 +614,9 @@ namespace llvm {
     }
 
   private:
-    static MCCodeEmitter *Allocator(const Target &T, TargetMachine &TM) {
-      return new CodeEmitterImpl(T, TM);
+    static MCCodeEmitter *Allocator(const Target &T, TargetMachine &TM,
+                                    MCContext &Ctx) {
+      return new CodeEmitterImpl(T, TM, Ctx);
     }
   };
 
diff --git a/libclamav/c++/llvm/include/llvm/Transforms/IPO/InlinerPass.h b/libclamav/c++/llvm/include/llvm/Transforms/IPO/InlinerPass.h
index dc5e644..30ece0e 100644
--- a/libclamav/c++/llvm/include/llvm/Transforms/IPO/InlinerPass.h
+++ b/libclamav/c++/llvm/include/llvm/Transforms/IPO/InlinerPass.h
@@ -52,10 +52,11 @@ struct Inliner : public CallGraphSCCPass {
   unsigned getInlineThreshold() const { return InlineThreshold; }
 
   /// Calculate the inline threshold for given Caller. This threshold is lower
-  /// if Caller is marked with OptimizeForSize and -inline-threshold is not
-  /// given on the comand line.
+  /// if the caller is marked with OptimizeForSize and -inline-threshold is not
+  /// given on the command line. It is higher if the callee is marked with the
+  /// inlinehint attribute.
   ///
-  unsigned getInlineThreshold(Function* Caller) const;
+  unsigned getInlineThreshold(CallSite CS) const;
 
   /// getInlineCost - This method must be implemented by the subclass to
   /// determine the cost of inlining the specified call site.  If the cost
diff --git a/libclamav/c++/llvm/include/llvm/Transforms/Utils/Cloning.h b/libclamav/c++/llvm/include/llvm/Transforms/Utils/Cloning.h
index 7fbbef9..5f494fb 100644
--- a/libclamav/c++/llvm/include/llvm/Transforms/Utils/Cloning.h
+++ b/libclamav/c++/llvm/include/llvm/Transforms/Utils/Cloning.h
@@ -19,6 +19,7 @@
 #define LLVM_TRANSFORMS_UTILS_CLONING_H
 
 #include "llvm/ADT/DenseMap.h"
+#include "llvm/ADT/Twine.h"
 
 namespace llvm {
 
@@ -101,7 +102,7 @@ struct ClonedCodeInfo {
 ///
 BasicBlock *CloneBasicBlock(const BasicBlock *BB,
                             DenseMap<const Value*, Value*> &ValueMap,
-                            const char *NameSuffix = "", Function *F = 0,
+                            const Twine &NameSuffix = "", Function *F = 0,
                             ClonedCodeInfo *CodeInfo = 0);
 
 
diff --git a/libclamav/c++/llvm/include/llvm/Transforms/Utils/Local.h b/libclamav/c++/llvm/include/llvm/Transforms/Utils/Local.h
index f6d9f82..bb6fd56 100644
--- a/libclamav/c++/llvm/include/llvm/Transforms/Utils/Local.h
+++ b/libclamav/c++/llvm/include/llvm/Transforms/Utils/Local.h
@@ -38,7 +38,8 @@ template<typename T> class SmallVectorImpl;
 /// from this value cannot trap.  If it is not obviously safe to load from the
 /// specified pointer, we do a quick local scan of the basic block containing
 /// ScanFrom, to determine if the address is already accessed.
-bool isSafeToLoadUnconditionally(Value *V, Instruction *ScanFrom);
+bool isSafeToLoadUnconditionally(Value *V, Instruction *ScanFrom,
+                                 unsigned Align, const TargetData *TD = 0);
 
 //===----------------------------------------------------------------------===//
 //  Local constant propagation.
@@ -130,7 +131,7 @@ bool EliminateDuplicatePHINodes(BasicBlock *BB);
 ///
 /// WARNING:  The entry node of a method may not be simplified.
 ///
-bool SimplifyCFG(BasicBlock *BB);
+bool SimplifyCFG(BasicBlock *BB, const TargetData *TD = 0);
 
 /// FoldBranchToCommonDest - If this basic block is ONLY a setcc and a branch,
 /// and if a predecessor branches to us and one of our successors, fold the
diff --git a/libclamav/c++/llvm/include/llvm/Type.h b/libclamav/c++/llvm/include/llvm/Type.h
index 2c37a68..52b2c84 100644
--- a/libclamav/c++/llvm/include/llvm/Type.h
+++ b/libclamav/c++/llvm/include/llvm/Type.h
@@ -82,10 +82,11 @@ public:
     IntegerTyID,     ///<  8: Arbitrary bit width integers
     FunctionTyID,    ///<  9: Functions
     StructTyID,      ///< 10: Structures
-    ArrayTyID,       ///< 11: Arrays
-    PointerTyID,     ///< 12: Pointers
-    OpaqueTyID,      ///< 13: Opaque: type with unknown structure
-    VectorTyID,      ///< 14: SIMD 'packed' format, or other vector type
+    UnionTyID,       ///< 11: Unions
+    ArrayTyID,       ///< 12: Arrays
+    PointerTyID,     ///< 13: Pointers
+    OpaqueTyID,      ///< 14: Opaque: type with unknown structure
+    VectorTyID,      ///< 15: SIMD 'packed' format, or other vector type
 
     NumTypeIDs,                         // Must remain as last defined ID
     LastPrimitiveTyID = LabelTyID,
@@ -233,7 +234,27 @@ public:
   /// isFPOrFPVector - Return true if this is a FP type or a vector of FP types.
   ///
   bool isFPOrFPVector() const;
-  
+ 
+  /// isFunction - True if this is an instance of FunctionType.
+  ///
+  bool isFunction() const { return ID == FunctionTyID; }
+
+  /// isStruct - True if this is an instance of StructType.
+  ///
+  bool isStruct() const { return ID == StructTyID; }
+
+  /// isArray - True if this is an instance of ArrayType.
+  ///
+  bool isArray() const { return ID == ArrayTyID; }
+
+  /// isPointer - True if this is an instance of PointerType.
+  ///
+  bool isPointer() const { return ID == PointerTyID; }
+
+  /// isVector - True if this is an instance of VectorType.
+  ///
+  bool isVector() const { return ID == VectorTyID; }
+
   /// isAbstract - True if the type is either an Opaque type, or is a derived
   /// type that includes an opaque type somewhere in it.
   ///
@@ -277,7 +298,7 @@ public:
   /// does not include vector types.
   ///
   inline bool isAggregateType() const {
-    return ID == StructTyID || ID == ArrayTyID;
+    return ID == StructTyID || ID == ArrayTyID || ID == UnionTyID;
   }
 
   /// isSized - Return true if it makes sense to take the size of this type.  To
@@ -290,7 +311,8 @@ public:
       return true;
     // If it is not something that can have a size (e.g. a function or label),
     // it doesn't have a size.
-    if (ID != StructTyID && ID != ArrayTyID && ID != VectorTyID)
+    if (ID != StructTyID && ID != ArrayTyID && ID != VectorTyID &&
+        ID != UnionTyID)
       return false;
     // If it is something that can have a size and it's concrete, it definitely
     // has a size, otherwise we have to try harder to decide.
diff --git a/libclamav/c++/llvm/include/llvm/Value.h b/libclamav/c++/llvm/include/llvm/Value.h
index 9045906..d06cbc0 100644
--- a/libclamav/c++/llvm/include/llvm/Value.h
+++ b/libclamav/c++/llvm/include/llvm/Value.h
@@ -215,6 +215,7 @@ public:
     ConstantFPVal,            // This is an instance of ConstantFP
     ConstantArrayVal,         // This is an instance of ConstantArray
     ConstantStructVal,        // This is an instance of ConstantStruct
+    ConstantUnionVal,         // This is an instance of ConstantUnion
     ConstantVectorVal,        // This is an instance of ConstantVector
     ConstantPointerNullVal,   // This is an instance of ConstantPointerNull
     MDNodeVal,                // This is an instance of MDNode
diff --git a/libclamav/c++/llvm/include/llvm/ValueSymbolTable.h b/libclamav/c++/llvm/include/llvm/ValueSymbolTable.h
index 53815ba..7497dae 100644
--- a/libclamav/c++/llvm/include/llvm/ValueSymbolTable.h
+++ b/libclamav/c++/llvm/include/llvm/ValueSymbolTable.h
@@ -17,7 +17,6 @@
 #include "llvm/Value.h"
 #include "llvm/ADT/StringMap.h"
 #include "llvm/System/DataTypes.h"
-#include "llvm/ADT/ilist_node.h"
 
 namespace llvm {
   template<typename ValueSubClass, typename ItemParentClass>
@@ -195,9 +194,15 @@ public:
 /// @name Mutators
 /// @{
 public:
-  /// insert - The method inserts a new entry into the stringmap.
+  /// insert - The method inserts a new entry into the stringmap. This will
+  /// replace the existing entry, if any.
   void insert(StringRef Name,  NamedMDNode *Node) {
-    (void) mmap.GetOrCreateValue(Name, Node);
+    StringMapEntry<NamedMDNode *> &Entry = 
+      mmap.GetOrCreateValue(Name, Node);
+    if (Entry.getValue() != Node) {
+      mmap.remove(&Entry);
+      (void) mmap.GetOrCreateValue(Name, Node);
+    }
   }
   
   /// This method removes a NamedMDNode from the symbol table.  
diff --git a/libclamav/c++/llvm/lib/Analysis/ConstantFolding.cpp b/libclamav/c++/llvm/lib/Analysis/ConstantFolding.cpp
index 4ae8859..ba87040 100644
--- a/libclamav/c++/llvm/lib/Analysis/ConstantFolding.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ConstantFolding.cpp
@@ -517,6 +517,42 @@ static Constant *SymbolicallyEvaluateBinop(unsigned Opc, Constant *Op0,
   return 0;
 }
 
+/// CastGEPIndices - If array indices are not pointer-sized integers,
+/// explicitly cast them so that they aren't implicitly casted by the
+/// getelementptr.
+static Constant *CastGEPIndices(Constant *const *Ops, unsigned NumOps,
+                                const Type *ResultTy,
+                                const TargetData *TD) {
+  if (!TD) return 0;
+  const Type *IntPtrTy = TD->getIntPtrType(ResultTy->getContext());
+
+  bool Any = false;
+  SmallVector<Constant*, 32> NewIdxs;
+  for (unsigned i = 1; i != NumOps; ++i) {
+    if ((i == 1 ||
+         !isa<StructType>(GetElementPtrInst::getIndexedType(Ops[0]->getType(),
+                                                            reinterpret_cast<Value *const *>(Ops+1),
+                                                            i-1))) &&
+        Ops[i]->getType() != IntPtrTy) {
+      Any = true;
+      NewIdxs.push_back(ConstantExpr::getCast(CastInst::getCastOpcode(Ops[i],
+                                                                      true,
+                                                                      IntPtrTy,
+                                                                      true),
+                                              Ops[i], IntPtrTy));
+    } else
+      NewIdxs.push_back(Ops[i]);
+  }
+  if (!Any) return 0;
+
+  Constant *C =
+    ConstantExpr::getGetElementPtr(Ops[0], &NewIdxs[0], NewIdxs.size());
+  if (ConstantExpr *CE = dyn_cast<ConstantExpr>(C))
+    if (Constant *Folded = ConstantFoldConstantExpression(CE, TD))
+      C = Folded;
+  return C;
+}
+
 /// SymbolicallyEvaluateGEP - If we can symbolically evaluate the specified GEP
 /// constant expression, do so.
 static Constant *SymbolicallyEvaluateGEP(Constant *const *Ops, unsigned NumOps,
@@ -676,10 +712,10 @@ Constant *llvm::ConstantFoldInstruction(Instruction *I, const TargetData *TD) {
 /// ConstantFoldConstantExpression - Attempt to fold the constant expression
 /// using the specified TargetData.  If successful, the constant result is
 /// result is returned, if not, null is returned.
-Constant *llvm::ConstantFoldConstantExpression(ConstantExpr *CE,
+Constant *llvm::ConstantFoldConstantExpression(const ConstantExpr *CE,
                                                const TargetData *TD) {
   SmallVector<Constant*, 8> Ops;
-  for (User::op_iterator i = CE->op_begin(), e = CE->op_end(); i != e; ++i) {
+  for (User::const_op_iterator i = CE->op_begin(), e = CE->op_end(); i != e; ++i) {
     Constant *NewC = cast<Constant>(*i);
     // Recursively fold the ConstantExpr's operands.
     if (ConstantExpr *NewCE = dyn_cast<ConstantExpr>(NewC))
@@ -810,6 +846,8 @@ Constant *llvm::ConstantFoldInstOperands(unsigned Opcode, const Type *DestTy,
   case Instruction::ShuffleVector:
     return ConstantExpr::getShuffleVector(Ops[0], Ops[1], Ops[2]);
   case Instruction::GetElementPtr:
+    if (Constant *C = CastGEPIndices(Ops, NumOps, DestTy, TD))
+      return C;
     if (Constant *C = SymbolicallyEvaluateGEP(Ops, NumOps, DestTy, TD))
       return C;
     
diff --git a/libclamav/c++/llvm/lib/Analysis/DebugInfo.cpp b/libclamav/c++/llvm/lib/Analysis/DebugInfo.cpp
index 8bed36e..258f1db 100644
--- a/libclamav/c++/llvm/lib/Analysis/DebugInfo.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/DebugInfo.cpp
@@ -725,6 +725,29 @@ DIBasicType DIFactory::CreateBasicTypeEx(DIDescriptor Context,
   return DIBasicType(MDNode::get(VMContext, &Elts[0], 10));
 }
 
+/// CreateArtificialType - Create a new DIType with "artificial" flag set.
+DIType DIFactory::CreateArtificialType(DIType Ty) {
+  if (Ty.isArtificial())
+    return Ty;
+
+  SmallVector<Value *, 9> Elts;
+  MDNode *N = Ty.getNode();
+  assert (N && "Unexpected input DIType!");
+  for (unsigned i = 0, e = N->getNumOperands(); i != e; ++i) {
+    if (Value *V = N->getOperand(i))
+      Elts.push_back(V);
+    else
+      Elts.push_back(Constant::getNullValue(Type::getInt32Ty(VMContext)));
+  }
+
+  unsigned CurFlags = Ty.getFlags();
+  CurFlags = CurFlags | DIType::FlagArtificial;
+
+  // Flags are stored at this slot.
+  Elts[8] =  ConstantInt::get(Type::getInt32Ty(VMContext), CurFlags);
+
+  return DIType(MDNode::get(VMContext, Elts.data(), Elts.size()));
+}
 
 /// CreateDerivedType - Create a derived type like const qualified type,
 /// pointer, typedef, etc.
@@ -794,7 +817,8 @@ DICompositeType DIFactory::CreateCompositeType(unsigned Tag,
                                                unsigned Flags,
                                                DIType DerivedFrom,
                                                DIArray Elements,
-                                               unsigned RuntimeLang) {
+                                               unsigned RuntimeLang,
+                                               MDNode *ContainingType) {
 
   Value *Elts[] = {
     GetTagConstant(Tag),
@@ -808,9 +832,10 @@ DICompositeType DIFactory::CreateCompositeType(unsigned Tag,
     ConstantInt::get(Type::getInt32Ty(VMContext), Flags),
     DerivedFrom.getNode(),
     Elements.getNode(),
-    ConstantInt::get(Type::getInt32Ty(VMContext), RuntimeLang)
+    ConstantInt::get(Type::getInt32Ty(VMContext), RuntimeLang),
+    ContainingType
   };
-  return DICompositeType(MDNode::get(VMContext, &Elts[0], 12));
+  return DICompositeType(MDNode::get(VMContext, &Elts[0], 13));
 }
 
 
@@ -858,7 +883,8 @@ DISubprogram DIFactory::CreateSubprogram(DIDescriptor Context,
                                          bool isLocalToUnit,
                                          bool isDefinition,
                                          unsigned VK, unsigned VIndex,
-                                         DIType ContainingType) {
+                                         DIType ContainingType,
+                                         bool isArtificial) {
 
   Value *Elts[] = {
     GetTagConstant(dwarf::DW_TAG_subprogram),
@@ -874,9 +900,10 @@ DISubprogram DIFactory::CreateSubprogram(DIDescriptor Context,
     ConstantInt::get(Type::getInt1Ty(VMContext), isDefinition),
     ConstantInt::get(Type::getInt32Ty(VMContext), (unsigned)VK),
     ConstantInt::get(Type::getInt32Ty(VMContext), VIndex),
-    ContainingType.getNode()
+    ContainingType.getNode(),
+    ConstantInt::get(Type::getInt1Ty(VMContext), isArtificial)
   };
-  return DISubprogram(MDNode::get(VMContext, &Elts[0], 14));
+  return DISubprogram(MDNode::get(VMContext, &Elts[0], 15));
 }
 
 /// CreateSubprogramDefinition - Create new subprogram descriptor for the
@@ -900,9 +927,10 @@ DISubprogram DIFactory::CreateSubprogramDefinition(DISubprogram &SPDeclaration)
     ConstantInt::get(Type::getInt1Ty(VMContext), true),
     DeclNode->getOperand(11), // Virtuality
     DeclNode->getOperand(12), // VIndex
-    DeclNode->getOperand(13)  // Containting Type
+    DeclNode->getOperand(13), // Containing Type
+    DeclNode->getOperand(14)  // isArtificial
   };
-  return DISubprogram(MDNode::get(VMContext, &Elts[0], 14));
+  return DISubprogram(MDNode::get(VMContext, &Elts[0], 15));
 }
 
 /// CreateGlobalVariable - Create a new descriptor for the specified global.
@@ -1053,8 +1081,14 @@
 
   Value *Args[] = { MDNode::get(Storage->getContext(), &Storage, 1),
                     D.getNode() };
-  return CallInst::Create(DeclareFn, Args, Args+2, "", InsertAtEnd);
-}
+
+  // If this block already has a terminator then insert this intrinsic
+  // before the terminator.
+  if (TerminatorInst *T = InsertAtEnd->getTerminator())
+    return CallInst::Create(DeclareFn, Args, Args+2, "", T);
+  else
+    return CallInst::Create(DeclareFn, Args, Args+2, "", InsertAtEnd);
+}
 
 /// InsertDbgValueIntrinsic - Insert a new llvm.dbg.value intrinsic call.
 Instruction *DIFactory::InsertDbgValueIntrinsic(Value *V, uint64_t Offset,
diff --git a/libclamav/c++/llvm/lib/Analysis/IPA/GlobalsModRef.cpp b/libclamav/c++/llvm/lib/Analysis/IPA/GlobalsModRef.cpp
index e803a48..ec94bc8 100644
--- a/libclamav/c++/llvm/lib/Analysis/IPA/GlobalsModRef.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/IPA/GlobalsModRef.cpp
@@ -486,7 +486,7 @@ GlobalsModRef::alias(const Value *V1, unsigned V1Size,
     if (GV1 && !NonAddressTakenGlobals.count(GV1)) GV1 = 0;
     if (GV2 && !NonAddressTakenGlobals.count(GV2)) GV2 = 0;
 
-    // If the the two pointers are derived from two different non-addr-taken
+    // If the two pointers are derived from two different non-addr-taken
     // globals, or if one is and the other isn't, we know these can't alias.
     if ((GV1 || GV2) && GV1 != GV2)
       return NoAlias;
diff --git a/libclamav/c++/llvm/lib/Analysis/IVUsers.cpp b/libclamav/c++/llvm/lib/Analysis/IVUsers.cpp
index 38611cc..4ce6868 100644
--- a/libclamav/c++/llvm/lib/Analysis/IVUsers.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/IVUsers.cpp
@@ -21,6 +21,7 @@
 #include "llvm/Analysis/Dominators.h"
 #include "llvm/Analysis/LoopPass.h"
 #include "llvm/Analysis/ScalarEvolutionExpressions.h"
+#include "llvm/Assembly/AsmAnnotationWriter.h"
 #include "llvm/ADT/STLExtras.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
@@ -35,42 +36,30 @@ Pass *llvm::createIVUsersPass() {
   return new IVUsers();
 }
 
-/// containsAddRecFromDifferentLoop - Determine whether expression S involves a
-/// subexpression that is an AddRec from a loop other than L.  An outer loop
-/// of L is OK, but not an inner loop nor a disjoint loop.
-static bool containsAddRecFromDifferentLoop(const SCEV *S, Loop *L) {
-  // This is very common, put it first.
-  if (isa<SCEVConstant>(S))
-    return false;
-  if (const SCEVCommutativeExpr *AE = dyn_cast<SCEVCommutativeExpr>(S)) {
-    for (unsigned int i=0; i< AE->getNumOperands(); i++)
-      if (containsAddRecFromDifferentLoop(AE->getOperand(i), L))
-        return true;
-    return false;
-  }
-  if (const SCEVAddRecExpr *AE = dyn_cast<SCEVAddRecExpr>(S)) {
-    if (const Loop *newLoop = AE->getLoop()) {
-      if (newLoop == L)
-        return false;
-      // if newLoop is an outer loop of L, this is OK.
-      if (newLoop->contains(L))
-        return false;
+/// CollectSubexprs - Split S into subexpressions which can be pulled out into
+/// separate registers.
+static void CollectSubexprs(const SCEV *S,
+                            SmallVectorImpl<const SCEV *> &Ops,
+                            ScalarEvolution &SE) {
+  if (const SCEVAddExpr *Add = dyn_cast<SCEVAddExpr>(S)) {
+    // Break out add operands.
+    for (SCEVAddExpr::op_iterator I = Add->op_begin(), E = Add->op_end();
+         I != E; ++I)
+      CollectSubexprs(*I, Ops, SE);
+    return;
+  } else if (const SCEVAddRecExpr *AR = dyn_cast<SCEVAddRecExpr>(S)) {
+    // Split a non-zero base out of an addrec.
+    if (!AR->getStart()->isZero()) {
+      CollectSubexprs(AR->getStart(), Ops, SE);
+      CollectSubexprs(SE.getAddRecExpr(SE.getIntegerSCEV(0, AR->getType()),
+                                       AR->getStepRecurrence(SE),
+                                       AR->getLoop()), Ops, SE);
+      return;
     }
-    return true;
   }
-  if (const SCEVUDivExpr *DE = dyn_cast<SCEVUDivExpr>(S))
-    return containsAddRecFromDifferentLoop(DE->getLHS(), L) ||
-           containsAddRecFromDifferentLoop(DE->getRHS(), L);
-#if 0
-  // SCEVSDivExpr has been backed out temporarily, but will be back; we'll
-  // need this when it is.
-  if (const SCEVSDivExpr *DE = dyn_cast<SCEVSDivExpr>(S))
-    return containsAddRecFromDifferentLoop(DE->getLHS(), L) ||
-           containsAddRecFromDifferentLoop(DE->getRHS(), L);
-#endif
-  if (const SCEVCastExpr *CE = dyn_cast<SCEVCastExpr>(S))
-    return containsAddRecFromDifferentLoop(CE->getOperand(), L);
-  return false;
+
+  // Otherwise use the value itself.
+  Ops.push_back(S);
 }
 
 /// getSCEVStartAndStride - Compute the start and stride of this expression,
@@ -89,35 +78,42 @@ static bool getSCEVStartAndStride(const SCEV *&SH, Loop *L, Loop *UseLoop,
   if (const SCEVAddExpr *AE = dyn_cast<SCEVAddExpr>(SH)) {
     for (unsigned i = 0, e = AE->getNumOperands(); i != e; ++i)
       if (const SCEVAddRecExpr *AddRec =
-             dyn_cast<SCEVAddRecExpr>(AE->getOperand(i))) {
-        if (AddRec->getLoop() == L)
-          TheAddRec = SE->getAddExpr(AddRec, TheAddRec);
-        else
-          return false;  // Nested IV of some sort?
-      } else {
+             dyn_cast<SCEVAddRecExpr>(AE->getOperand(i)))
+        TheAddRec = SE->getAddExpr(AddRec, TheAddRec);
+      else
         Start = SE->getAddExpr(Start, AE->getOperand(i));
-      }
   } else if (isa<SCEVAddRecExpr>(SH)) {
     TheAddRec = SH;
   } else {
     return false;  // not analyzable.
   }
 
-  const SCEVAddRecExpr *AddRec = dyn_cast<SCEVAddRecExpr>(TheAddRec);
-  if (!AddRec || AddRec->getLoop() != L) return false;
+  // Break down TheAddRec into its component parts.
+  SmallVector<const SCEV *, 4> Subexprs;
+  CollectSubexprs(TheAddRec, Subexprs, *SE);
+
+  // Look for an addrec on the current loop among the parts.
+  const SCEV *AddRecStride = 0;
+  for (SmallVectorImpl<const SCEV *>::iterator I = Subexprs.begin(),
+       E = Subexprs.end(); I != E; ++I) {
+    const SCEV *S = *I;
+    if (const SCEVAddRecExpr *AR = dyn_cast<SCEVAddRecExpr>(S))
+      if (AR->getLoop() == L) {
+        *I = AR->getStart();
+        AddRecStride = AR->getStepRecurrence(*SE);
+        break;
+      }
+  }
+  if (!AddRecStride)
+    return false;
+
+  // Add up everything else into a start value (which may not be
+  // loop-invariant).
+  const SCEV *AddRecStart = SE->getAddExpr(Subexprs);
 
   // Use getSCEVAtScope to attempt to simplify other loops out of
   // the picture.
-  const SCEV *AddRecStart = AddRec->getStart();
   AddRecStart = SE->getSCEVAtScope(AddRecStart, UseLoop);
-  const SCEV *AddRecStride = AddRec->getStepRecurrence(*SE);
-
-  // FIXME: If Start contains an SCEVAddRecExpr from a different loop, other
-  // than an outer loop of the current loop, reject it.  LSR has no concept of
-  // operating on more than one loop at a time so don't confuse it with such
-  // expressions.
-  if (containsAddRecFromDifferentLoop(AddRecStart, L))
-    return false;
 
   Start = SE->getAddExpr(Start, AddRecStart);
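The rewritten logic above can be pictured with plain arithmetic: the use's SCEV is flattened into summands, the addrec `{Start,+,Stride}<L>` among them is replaced by its start, and everything else folds back into the start value. A minimal standalone sketch of that decomposition (the `Term` record and function names are illustrative, not LLVM API):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Hypothetical flat summand: either a plain value, or an addrec
// {Start,+,Stride} on the loop of interest.
struct Term {
  bool IsAddRec;
  long Value;   // used when !IsAddRec
  long Start;   // used when IsAddRec
  long Stride;  // used when IsAddRec
};

// Mirror of the rewritten logic: swap the first addrec for its start
// value and remember its stride; everything else sums into the start.
// Returns {Start, Stride}; Stride == 0 means no addrec was found.
static std::pair<long, long> splitStartAndStride(std::vector<Term> Terms) {
  long Stride = 0;
  for (Term &T : Terms)
    if (T.IsAddRec && Stride == 0) {
      Stride = T.Stride;
      T.IsAddRec = false;
      T.Value = T.Start;  // analogous to *I = AR->getStart()
    }
  long Start = 0;
  for (const Term &T : Terms)
    Start += T.Value;     // analogous to SE->getAddExpr(Subexprs)
  return std::make_pair(Start, Stride);
}
```

For example, 5 + {3,+,4}<L> + 2 splits into start 10 and stride 4.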
 
@@ -130,7 +126,7 @@ static bool getSCEVStartAndStride(const SCEV *&SH, Loop *L, Loop *UseLoop,
 
     DEBUG(dbgs() << "[";
           WriteAsOperand(dbgs(), L->getHeader(), /*PrintType=*/false);
-          dbgs() << "] Variable stride: " << *AddRec << "\n");
+          dbgs() << "] Variable stride: " << *AddRecStride << "\n");
   }
 
   Stride = AddRecStride;
@@ -246,14 +242,6 @@ bool IVUsers::AddUsersIfInteresting(Instruction *I) {
     }
 
     if (AddUserToIVUsers) {
-      IVUsersOfOneStride *StrideUses = IVUsesByStride[Stride];
-      if (!StrideUses) {    // First occurrence of this stride?
-        StrideOrder.push_back(Stride);
-        StrideUses = new IVUsersOfOneStride(Stride);
-        IVUses.push_back(StrideUses);
-        IVUsesByStride[Stride] = StrideUses;
-      }
-
       // Okay, we found a user that we cannot reduce.  Analyze the instruction
       // and decide what to do with it.  If we are a use inside of the loop, use
       // the value before incrementation, otherwise use it after incrementation.
@@ -261,27 +249,21 @@ bool IVUsers::AddUsersIfInteresting(Instruction *I) {
         // The value used will be incremented by the stride more than we are
         // expecting, so subtract this off.
         const SCEV *NewStart = SE->getMinusSCEV(Start, Stride);
-        StrideUses->addUser(NewStart, User, I);
-        StrideUses->Users.back().setIsUseOfPostIncrementedValue(true);
+        IVUses.push_back(new IVStrideUse(this, Stride, NewStart, User, I));
+        IVUses.back().setIsUseOfPostIncrementedValue(true);
         DEBUG(dbgs() << "   USING POSTINC SCEV, START=" << *NewStart<< "\n");
       } else {
-        StrideUses->addUser(Start, User, I);
+        IVUses.push_back(new IVStrideUse(this, Stride, Start, User, I));
       }
     }
   }
   return true;
 }
 
-void IVUsers::AddUser(const SCEV *Stride, const SCEV *Offset,
-                      Instruction *User, Value *Operand) {
-  IVUsersOfOneStride *StrideUses = IVUsesByStride[Stride];
-  if (!StrideUses) {    // First occurrence of this stride?
-    StrideOrder.push_back(Stride);
-    StrideUses = new IVUsersOfOneStride(Stride);
-    IVUses.push_back(StrideUses);
-    IVUsesByStride[Stride] = StrideUses;
-  }
-  IVUsesByStride[Stride]->addUser(Offset, User, Operand);
+IVStrideUse &IVUsers::AddUser(const SCEV *Stride, const SCEV *Offset,
+                              Instruction *User, Value *Operand) {
+  IVUses.push_back(new IVStrideUse(this, Stride, Offset, User, Operand));
+  return IVUses.back();
 }
 
 IVUsers::IVUsers()
@@ -315,15 +297,15 @@ bool IVUsers::runOnLoop(Loop *l, LPPassManager &LPM) {
 /// value of the OperandValToReplace of the given IVStrideUse.
 const SCEV *IVUsers::getReplacementExpr(const IVStrideUse &U) const {
   // Start with zero.
-  const SCEV *RetVal = SE->getIntegerSCEV(0, U.getParent()->Stride->getType());
+  const SCEV *RetVal = SE->getIntegerSCEV(0, U.getStride()->getType());
   // Create the basic add recurrence.
-  RetVal = SE->getAddRecExpr(RetVal, U.getParent()->Stride, L);
+  RetVal = SE->getAddRecExpr(RetVal, U.getStride(), L);
   // Add the offset in a separate step, because it may be loop-variant.
   RetVal = SE->getAddExpr(RetVal, U.getOffset());
   // For uses of post-incremented values, add an extra stride to compute
   // the actual replacement value.
   if (U.isUseOfPostIncrementedValue())
-    RetVal = SE->getAddExpr(RetVal, U.getParent()->Stride);
+    RetVal = SE->getAddExpr(RetVal, U.getStride());
   return RetVal;
 }
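The replacement expression built above is `{0,+,Stride} + Offset`, with one extra stride for post-increment uses. Evaluated at a concrete iteration this is just scalar arithmetic, which the following sketch models (a simplified scalar analogue, not the SCEV machinery itself):

```cpp
#include <cassert>

// Scalar model of IVUsers::getReplacementExpr: the addrec {0,+,Stride}
// evaluates to Stride*Iteration, the offset is added separately (it may
// be loop-variant in general), and a post-increment use sees one extra
// stride.
static long replacementValue(long Stride, long Offset, long Iteration,
                             bool PostInc) {
  long V = Stride * Iteration;  // {0,+,Stride} at this iteration
  V += Offset;                  // add the offset in a separate step
  if (PostInc)
    V += Stride;                // use of the post-incremented value
  return V;
}
```

With stride 4 and offset 100, iteration 3 yields 112 pre-increment and 116 post-increment.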
 
@@ -332,9 +314,9 @@ const SCEV *IVUsers::getReplacementExpr(const IVStrideUse &U) const {
 /// isUseOfPostIncrementedValue flag.
 const SCEV *IVUsers::getCanonicalExpr(const IVStrideUse &U) const {
   // Start with zero.
-  const SCEV *RetVal = SE->getIntegerSCEV(0, U.getParent()->Stride->getType());
+  const SCEV *RetVal = SE->getIntegerSCEV(0, U.getStride()->getType());
   // Create the basic add recurrence.
-  RetVal = SE->getAddRecExpr(RetVal, U.getParent()->Stride, L);
+  RetVal = SE->getAddRecExpr(RetVal, U.getStride(), L);
   // Add the offset in a separate step, because it may be loop-variant.
   RetVal = SE->getAddExpr(RetVal, U.getOffset());
   return RetVal;
@@ -349,24 +331,20 @@ void IVUsers::print(raw_ostream &OS, const Module *M) const {
   }
   OS << ":\n";
 
-  for (unsigned Stride = 0, e = StrideOrder.size(); Stride != e; ++Stride) {
-    std::map<const SCEV *, IVUsersOfOneStride*>::const_iterator SI =
-      IVUsesByStride.find(StrideOrder[Stride]);
-    assert(SI != IVUsesByStride.end() && "Stride doesn't exist!");
-    OS << "  Stride " << *SI->first->getType() << " " << *SI->first << ":\n";
-
-    for (ilist<IVStrideUse>::const_iterator UI = SI->second->Users.begin(),
-         E = SI->second->Users.end(); UI != E; ++UI) {
-      OS << "    ";
-      WriteAsOperand(OS, UI->getOperandValToReplace(), false);
-      OS << " = ";
-      OS << *getReplacementExpr(*UI);
-      if (UI->isUseOfPostIncrementedValue())
-        OS << " (post-inc)";
-      OS << " in ";
-      UI->getUser()->print(OS);
-      OS << '\n';
-    }
+  // Use a default AssemblyAnnotationWriter to suppress the default info
+  // comments, which aren't relevant here.
+  AssemblyAnnotationWriter Annotator;
+  for (ilist<IVStrideUse>::const_iterator UI = IVUses.begin(),
+       E = IVUses.end(); UI != E; ++UI) {
+    OS << "  ";
+    WriteAsOperand(OS, UI->getOperandValToReplace(), false);
+    OS << " = "
+       << *getReplacementExpr(*UI);
+    if (UI->isUseOfPostIncrementedValue())
+      OS << " (post-inc)";
+    OS << " in  ";
+    UI->getUser()->print(OS, &Annotator);
+    OS << '\n';
   }
 }
 
@@ -375,14 +353,12 @@ void IVUsers::dump() const {
 }
 
 void IVUsers::releaseMemory() {
-  IVUsesByStride.clear();
-  StrideOrder.clear();
   Processed.clear();
   IVUses.clear();
 }
 
 void IVStrideUse::deleted() {
   // Remove this user from the list.
-  Parent->Users.erase(this);
+  Parent->IVUses.erase(this);
   // this now dangles!
 }
diff --git a/libclamav/c++/llvm/lib/Analysis/InlineCost.cpp b/libclamav/c++/llvm/lib/Analysis/InlineCost.cpp
index 651c918..972d034 100644
--- a/libclamav/c++/llvm/lib/Analysis/InlineCost.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/InlineCost.cpp
@@ -25,26 +25,28 @@ unsigned InlineCostAnalyzer::FunctionInfo::
          CountCodeReductionForConstant(Value *V) {
   unsigned Reduction = 0;
   for (Value::use_iterator UI = V->use_begin(), E = V->use_end(); UI != E; ++UI)
-    if (isa<BranchInst>(*UI))
-      Reduction += 40;          // Eliminating a conditional branch is a big win
-    else if (SwitchInst *SI = dyn_cast<SwitchInst>(*UI))
-      // Eliminating a switch is a big win, proportional to the number of edges
-      // deleted.
-      Reduction += (SI->getNumSuccessors()-1) * 40;
-    else if (isa<IndirectBrInst>(*UI))
-      // Eliminating an indirect branch is a big win.
-      Reduction += 200;
-    else if (CallInst *CI = dyn_cast<CallInst>(*UI)) {
+    if (isa<BranchInst>(*UI) || isa<SwitchInst>(*UI)) {
+      // We will be able to eliminate all but one of the successors.
+      const TerminatorInst &TI = cast<TerminatorInst>(**UI);
+      const unsigned NumSucc = TI.getNumSuccessors();
+      unsigned Instrs = 0;
+      for (unsigned I = 0; I != NumSucc; ++I)
+        Instrs += TI.getSuccessor(I)->size();
+      // We don't know which blocks will be eliminated, so use the average size.
+      Reduction += InlineConstants::InstrCost*Instrs*(NumSucc-1)/NumSucc;
+    } else if (CallInst *CI = dyn_cast<CallInst>(*UI)) {
       // Turning an indirect call into a direct call is a BIG win
-      Reduction += CI->getCalledValue() == V ? 500 : 0;
+      if (CI->getCalledValue() == V)
+        Reduction += InlineConstants::IndirectCallBonus;
     } else if (InvokeInst *II = dyn_cast<InvokeInst>(*UI)) {
       // Turning an indirect call into a direct call is a BIG win
-      Reduction += II->getCalledValue() == V ? 500 : 0;
+      if (II->getCalledValue() == V)
+        Reduction += InlineConstants::IndirectCallBonus;
     } else {
       // Figure out if this instruction will be removed due to simple constant
       // propagation.
       Instruction &Inst = cast<Instruction>(**UI);
-      
+
       // We can't constant propagate instructions which have effects or
       // read memory.
       //
@@ -53,7 +55,7 @@ unsigned InlineCostAnalyzer::FunctionInfo::
       // Unfortunately, we don't know the pointer that may get propagated here,
       // so we can't make this decision.
       if (Inst.mayReadFromMemory() || Inst.mayHaveSideEffects() ||
-          isa<AllocaInst>(Inst)) 
+          isa<AllocaInst>(Inst))
         continue;
 
       bool AllOperandsConstant = true;
@@ -65,7 +67,7 @@ unsigned InlineCostAnalyzer::FunctionInfo::
 
       if (AllOperandsConstant) {
         // We will get to remove this instruction...
-        Reduction += 7;
+        Reduction += InlineConstants::InstrCost;
 
         // And any other instructions that use it which become constants
         // themselves.
@@ -87,11 +89,14 @@ unsigned InlineCostAnalyzer::FunctionInfo::
   for (Value::use_iterator UI = V->use_begin(), E = V->use_end(); UI != E;++UI){
     Instruction *I = cast<Instruction>(*UI);
     if (isa<LoadInst>(I) || isa<StoreInst>(I))
-      Reduction += 10;
+      Reduction += InlineConstants::InstrCost;
     else if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(I)) {
       // If the GEP has variable indices, we won't be able to do much with it.
-      if (!GEP->hasAllConstantIndices())
-        Reduction += CountCodeReductionForAlloca(GEP)+15;
+      if (GEP->hasAllConstantIndices())
+        Reduction += CountCodeReductionForAlloca(GEP);
+    } else if (BitCastInst *BCI = dyn_cast<BitCastInst>(I)) {
+      // Track pointer through bitcasts.
+      Reduction += CountCodeReductionForAlloca(BCI);
     } else {
       // If there is some other strange instruction, we're not going to be able
       // to do much if we inline this.
@@ -158,10 +163,11 @@ void CodeMetrics::analyzeBasicBlock(const BasicBlock *BB) {
             (F->getName() == "setjmp" || F->getName() == "_setjmp"))
           NeverInline = true;
 
-      // Calls often compile into many machine instructions.  Bump up their
-      // cost to reflect this.
-      if (!isa<IntrinsicInst>(II) && !callIsSmall(CS.getCalledFunction()))
-        NumInsts += InlineConstants::CallPenalty;
+      if (!isa<IntrinsicInst>(II) && !callIsSmall(CS.getCalledFunction())) {
+        // Each argument to a call takes on average one instruction to set up.
+        NumInsts += CS.arg_size();
+        ++NumCalls;
+      }
     }
     
     if (const AllocaInst *AI = dyn_cast<AllocaInst>(II)) {
@@ -223,8 +229,14 @@ void InlineCostAnalyzer::FunctionInfo::analyzeFunction(Function *F) {
   if (Metrics.NumRets==1)
     --Metrics.NumInsts;
 
+  // Don't bother calculating argument weights if we are never going to inline
+  // the function anyway.
+  if (Metrics.NeverInline)
+    return;
+
   // Check out all of the arguments to the function, figuring out how much
   // code can be eliminated if one of the arguments is a constant.
+  ArgumentWeights.reserve(F->arg_size());
   for (Function::arg_iterator I = F->arg_begin(), E = F->arg_end(); I != E; ++I)
     ArgumentWeights.push_back(ArgInfo(CountCodeReductionForConstant(I),
                                       CountCodeReductionForAlloca(I)));
@@ -313,23 +325,18 @@ InlineCost InlineCostAnalyzer::getInlineCost(CallSite CS,
   for (CallSite::arg_iterator I = CS.arg_begin(), E = CS.arg_end();
        I != E; ++I, ++ArgNo) {
     // Each argument passed in has a cost at both the caller and the callee
-    // sides.  This favors functions that take many arguments over functions
-    // that take few arguments.
-    InlineCost -= 20;
-    
-    // If this is a function being passed in, it is very likely that we will be
-    // able to turn an indirect function call into a direct function call.
-    if (isa<Function>(I))
-      InlineCost -= 100;
-    
+    // sides.  Measurements show that each argument costs about the same as an
+    // instruction.
+    InlineCost -= InlineConstants::InstrCost;
+
     // If an alloca is passed in, inlining this function is likely to allow
     // significant future optimization possibilities (like scalar promotion, and
     // scalarization), so encourage the inlining of the function.
     //
-    else if (isa<AllocaInst>(I)) {
+    if (isa<AllocaInst>(I)) {
       if (ArgNo < CalleeFI.ArgumentWeights.size())
         InlineCost -= CalleeFI.ArgumentWeights[ArgNo].AllocaWeight;
-      
+
       // If this is a constant being passed into the function, use the argument
       // weights calculated for the callee to determine how much will be folded
       // away with this information.
@@ -341,14 +348,17 @@ InlineCost InlineCostAnalyzer::getInlineCost(CallSite CS,
   
   // Now that we have considered all of the factors that make the call site more
   // likely to be inlined, look at factors that make us not want to inline it.
-  
+
+  // Calls usually take a long time, so they make the inlining gain smaller.
+  InlineCost += CalleeFI.Metrics.NumCalls * InlineConstants::CallPenalty;
+
   // Don't inline into something too big, which would make it bigger.
   // "size" here is the number of basic blocks, not instructions.
   //
   InlineCost += Caller->size()/15;
   
   // Look at the size of the callee. Each instruction counts as 5.
-  InlineCost += CalleeFI.Metrics.NumInsts*5;
+  InlineCost += CalleeFI.Metrics.NumInsts*InlineConstants::InstrCost;
 
   return llvm::InlineCost::get(InlineCost);
 }
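The reworked `CountCodeReductionForConstant` above estimates that when a branch or switch folds to a constant, all but one of its `NumSucc` successor blocks disappear, but since it cannot tell which block survives it credits the average successor size times `NumSucc-1`. The arithmetic can be sketched in isolation (the per-instruction cost of 5 matches the "Each instruction counts as 5" comment; the real constant lives in `llvm::InlineConstants::InstrCost`):

```cpp
#include <cassert>
#include <vector>

// Assumed per-instruction cost (see InlineConstants::InstrCost).
static const unsigned InstrCost = 5;

// Expected size reduction when a terminator with the given successor
// block sizes becomes unconditional: one successor survives, but we
// don't know which, so charge the average size of the eliminated ones.
static unsigned branchFoldReduction(const std::vector<unsigned> &SuccSizes) {
  unsigned Instrs = 0;
  for (unsigned S : SuccSizes)
    Instrs += S;
  unsigned NumSucc = SuccSizes.size();
  return InstrCost * Instrs * (NumSucc - 1) / NumSucc;
}
```

A two-way branch over blocks of 10 and 20 instructions yields 5*30*1/2 = 75, and an unconditional branch (one successor) correctly yields 0.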
diff --git a/libclamav/c++/llvm/lib/Analysis/LiveValues.cpp b/libclamav/c++/llvm/lib/Analysis/LiveValues.cpp
index 02ec7d3..1b91d93 100644
--- a/libclamav/c++/llvm/lib/Analysis/LiveValues.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/LiveValues.cpp
@@ -184,7 +184,7 @@ LiveValues::Memo &LiveValues::compute(const Value *V) {
     }
   }
 
-  // If the value was never used outside the the block in which it was
+  // If the value was never used outside the block in which it was
   // defined, it's killed in that block.
   if (!LiveOutOfDefBB)
     M.Killed.insert(DefBB);
diff --git a/libclamav/c++/llvm/lib/Analysis/MemoryBuiltins.cpp b/libclamav/c++/llvm/lib/Analysis/MemoryBuiltins.cpp
index b448628..297b588 100644
--- a/libclamav/c++/llvm/lib/Analysis/MemoryBuiltins.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/MemoryBuiltins.cpp
@@ -24,7 +24,7 @@ using namespace llvm;
 //  malloc Call Utility Functions.
 //
 
-/// isMalloc - Returns true if the the value is either a malloc call or a
+/// isMalloc - Returns true if the value is either a malloc call or a
 /// bitcast of the result of a malloc call.
 bool llvm::isMalloc(const Value *I) {
   return extractMallocCall(I) || extractMallocCallFromBitCast(I);
@@ -183,7 +183,7 @@ Value *llvm::getMallocArraySize(CallInst *CI, const TargetData *TD,
 //  free Call Utility Functions.
 //
 
-/// isFreeCall - Returns true if the the value is a call to the builtin free()
+/// isFreeCall - Returns true if the value is a call to the builtin free()
 bool llvm::isFreeCall(const Value *I) {
   const CallInst *CI = dyn_cast<CallInst>(I);
   if (!CI)
diff --git a/libclamav/c++/llvm/lib/Analysis/ScalarEvolution.cpp b/libclamav/c++/llvm/lib/Analysis/ScalarEvolution.cpp
index b875543..1ed9d07 100644
--- a/libclamav/c++/llvm/lib/Analysis/ScalarEvolution.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ScalarEvolution.cpp
@@ -312,6 +312,21 @@ bool SCEVAddRecExpr::isLoopInvariant(const Loop *QueryLoop) const {
   return true;
 }
 
+bool
+SCEVAddRecExpr::dominates(BasicBlock *BB, DominatorTree *DT) const {
+  return DT->dominates(L->getHeader(), BB) &&
+         SCEVNAryExpr::dominates(BB, DT);
+}
+
+bool
+SCEVAddRecExpr::properlyDominates(BasicBlock *BB, DominatorTree *DT) const {
+  // This uses a "dominates" query instead of "properly dominates" query because
+  // the instruction which produces the addrec's value is a PHI, and a PHI
+  // effectively properly dominates its entire containing block.
+  return DT->dominates(L->getHeader(), BB) &&
+         SCEVNAryExpr::properlyDominates(BB, DT);
+}
+
 void SCEVAddRecExpr::print(raw_ostream &OS) const {
   OS << "{" << *Operands[0];
   for (unsigned i = 1, e = Operands.size(); i != e; ++i)
@@ -321,15 +336,6 @@ void SCEVAddRecExpr::print(raw_ostream &OS) const {
   OS << ">";
 }
 
-void SCEVFieldOffsetExpr::print(raw_ostream &OS) const {
-  // LLVM struct fields don't have names, so just print the field number.
-  OS << "offsetof(" << *STy << ", " << FieldNo << ")";
-}
-
-void SCEVAllocSizeExpr::print(raw_ostream &OS) const {
-  OS << "sizeof(" << *AllocTy << ")";
-}
-
 bool SCEVUnknown::isLoopInvariant(const Loop *L) const {
   // All non-instruction values are loop invariant.  All instructions are loop
   // invariant if they are not contained in the specified loop.
@@ -356,7 +362,91 @@ const Type *SCEVUnknown::getType() const {
   return V->getType();
 }
 
+bool SCEVUnknown::isSizeOf(const Type *&AllocTy) const {
+  if (ConstantExpr *VCE = dyn_cast<ConstantExpr>(V))
+    if (VCE->getOpcode() == Instruction::PtrToInt)
+      if (ConstantExpr *CE = dyn_cast<ConstantExpr>(VCE->getOperand(0)))
+        if (CE->getOpcode() == Instruction::GetElementPtr &&
+            CE->getOperand(0)->isNullValue() &&
+            CE->getNumOperands() == 2)
+          if (ConstantInt *CI = dyn_cast<ConstantInt>(CE->getOperand(1)))
+            if (CI->isOne()) {
+              AllocTy = cast<PointerType>(CE->getOperand(0)->getType())
+                                 ->getElementType();
+              return true;
+            }
+
+  return false;
+}
+
+bool SCEVUnknown::isAlignOf(const Type *&AllocTy) const {
+  if (ConstantExpr *VCE = dyn_cast<ConstantExpr>(V))
+    if (VCE->getOpcode() == Instruction::PtrToInt)
+      if (ConstantExpr *CE = dyn_cast<ConstantExpr>(VCE->getOperand(0)))
+        if (CE->getOpcode() == Instruction::GetElementPtr &&
+            CE->getOperand(0)->isNullValue()) {
+          const Type *Ty =
+            cast<PointerType>(CE->getOperand(0)->getType())->getElementType();
+          if (const StructType *STy = dyn_cast<StructType>(Ty))
+            if (!STy->isPacked() &&
+                CE->getNumOperands() == 3 &&
+                CE->getOperand(1)->isNullValue()) {
+              if (ConstantInt *CI = dyn_cast<ConstantInt>(CE->getOperand(2)))
+                if (CI->isOne() &&
+                    STy->getNumElements() == 2 &&
+                    STy->getElementType(0)->isInteger(1)) {
+                  AllocTy = STy->getElementType(1);
+                  return true;
+                }
+            }
+        }
+
+  return false;
+}
+
+bool SCEVUnknown::isOffsetOf(const Type *&CTy, Constant *&FieldNo) const {
+  if (ConstantExpr *VCE = dyn_cast<ConstantExpr>(V))
+    if (VCE->getOpcode() == Instruction::PtrToInt)
+      if (ConstantExpr *CE = dyn_cast<ConstantExpr>(VCE->getOperand(0)))
+        if (CE->getOpcode() == Instruction::GetElementPtr &&
+            CE->getNumOperands() == 3 &&
+            CE->getOperand(0)->isNullValue() &&
+            CE->getOperand(1)->isNullValue()) {
+          const Type *Ty =
+            cast<PointerType>(CE->getOperand(0)->getType())->getElementType();
+          // Ignore vector types here so that ScalarEvolutionExpander doesn't
+          // emit getelementptrs that index into vectors.
+          if (isa<StructType>(Ty) || isa<ArrayType>(Ty)) {
+            CTy = Ty;
+            FieldNo = CE->getOperand(2);
+            return true;
+          }
+        }
+
+  return false;
+}
+
 void SCEVUnknown::print(raw_ostream &OS) const {
+  const Type *AllocTy;
+  if (isSizeOf(AllocTy)) {
+    OS << "sizeof(" << *AllocTy << ")";
+    return;
+  }
+  if (isAlignOf(AllocTy)) {
+    OS << "alignof(" << *AllocTy << ")";
+    return;
+  }
+
+  const Type *CTy;
+  Constant *FieldNo;
+  if (isOffsetOf(CTy, FieldNo)) {
+    OS << "offsetof(" << *CTy << ", ";
+    WriteAsOperand(OS, FieldNo, false);
+    OS << ")";
+    return;
+  }
+
+  // Otherwise just print it normally.
   WriteAsOperand(OS, V, false);
 }
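The `isSizeOf` recognizer above matches the IR idiom `ptrtoint(getelementptr T* null, 1)`: stepping a `T*` by one element advances by exactly `sizeof(T)` bytes. The same identity can be checked in plain C++, using a real array rather than a null pointer so the pointer arithmetic stays well-defined (`Pair` is an arbitrary example type):

```cpp
#include <cassert>
#include <cstddef>

struct Pair { int A; double B; };

// Byte distance between consecutive array elements -- the quantity the
// ptrtoint(gep null, 1) idiom encodes as sizeof(T) in IR.
static std::ptrdiff_t elementStrideBytes() {
  static Pair Arr[2];
  return reinterpret_cast<char *>(&Arr[1]) -
         reinterpret_cast<char *>(&Arr[0]);
}
```

By the definition of array layout this distance equals `sizeof(Pair)` for any element type.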
 
@@ -515,21 +605,6 @@ namespace {
         return operator()(LC->getOperand(), RC->getOperand());
       }
 
-      // Compare offsetof expressions.
-      if (const SCEVFieldOffsetExpr *LA = dyn_cast<SCEVFieldOffsetExpr>(LHS)) {
-        const SCEVFieldOffsetExpr *RA = cast<SCEVFieldOffsetExpr>(RHS);
-        if (CompareTypes(LA->getStructType(), RA->getStructType()) ||
-            CompareTypes(RA->getStructType(), LA->getStructType()))
-          return CompareTypes(LA->getStructType(), RA->getStructType());
-        return LA->getFieldNo() < RA->getFieldNo();
-      }
-
-      // Compare sizeof expressions by the allocation type.
-      if (const SCEVAllocSizeExpr *LA = dyn_cast<SCEVAllocSizeExpr>(LHS)) {
-        const SCEVAllocSizeExpr *RA = cast<SCEVAllocSizeExpr>(RHS);
-        return CompareTypes(LA->getAllocType(), RA->getAllocType());
-      }
-
       llvm_unreachable("Unknown SCEV kind!");
       return false;
     }
@@ -2172,74 +2247,38 @@ const SCEV *ScalarEvolution::getUMinExpr(const SCEV *LHS,
   return getNotSCEV(getUMaxExpr(getNotSCEV(LHS), getNotSCEV(RHS)));
 }
 
-const SCEV *ScalarEvolution::getFieldOffsetExpr(const StructType *STy,
-                                                unsigned FieldNo) {
-  // If we have TargetData we can determine the constant offset.
-  if (TD) {
-    const Type *IntPtrTy = TD->getIntPtrType(getContext());
-    const StructLayout &SL = *TD->getStructLayout(STy);
-    uint64_t Offset = SL.getElementOffset(FieldNo);
-    return getIntegerSCEV(Offset, IntPtrTy);
-  }
+const SCEV *ScalarEvolution::getSizeOfExpr(const Type *AllocTy) {
+  Constant *C = ConstantExpr::getSizeOf(AllocTy);
+  if (ConstantExpr *CE = dyn_cast<ConstantExpr>(C))
+    C = ConstantFoldConstantExpression(CE, TD);
+  const Type *Ty = getEffectiveSCEVType(PointerType::getUnqual(AllocTy));
+  return getTruncateOrZeroExtend(getSCEV(C), Ty);
+}
 
-  // Field 0 is always at offset 0.
-  if (FieldNo == 0) {
-    const Type *Ty = getEffectiveSCEVType(PointerType::getUnqual(STy));
-    return getIntegerSCEV(0, Ty);
-  }
+const SCEV *ScalarEvolution::getAlignOfExpr(const Type *AllocTy) {
+  Constant *C = ConstantExpr::getAlignOf(AllocTy);
+  if (ConstantExpr *CE = dyn_cast<ConstantExpr>(C))
+    C = ConstantFoldConstantExpression(CE, TD);
+  const Type *Ty = getEffectiveSCEVType(PointerType::getUnqual(AllocTy));
+  return getTruncateOrZeroExtend(getSCEV(C), Ty);
+}
 
-  // Okay, it looks like we really DO need an offsetof expr.  Check to see if we
-  // already have one, otherwise create a new one.
-  FoldingSetNodeID ID;
-  ID.AddInteger(scFieldOffset);
-  ID.AddPointer(STy);
-  ID.AddInteger(FieldNo);
-  void *IP = 0;
-  if (const SCEV *S = UniqueSCEVs.FindNodeOrInsertPos(ID, IP)) return S;
-  SCEV *S = SCEVAllocator.Allocate<SCEVFieldOffsetExpr>();
+const SCEV *ScalarEvolution::getOffsetOfExpr(const StructType *STy,
+                                             unsigned FieldNo) {
+  Constant *C = ConstantExpr::getOffsetOf(STy, FieldNo);
+  if (ConstantExpr *CE = dyn_cast<ConstantExpr>(C))
+    C = ConstantFoldConstantExpression(CE, TD);
   const Type *Ty = getEffectiveSCEVType(PointerType::getUnqual(STy));
-  new (S) SCEVFieldOffsetExpr(ID, Ty, STy, FieldNo);
-  UniqueSCEVs.InsertNode(S, IP);
-  return S;
+  return getTruncateOrZeroExtend(getSCEV(C), Ty);
 }
 
-const SCEV *ScalarEvolution::getAllocSizeExpr(const Type *AllocTy) {
-  // If we have TargetData we can determine the constant size.
-  if (TD && AllocTy->isSized()) {
-    const Type *IntPtrTy = TD->getIntPtrType(getContext());
-    return getIntegerSCEV(TD->getTypeAllocSize(AllocTy), IntPtrTy);
-  }
-
-  // Expand an array size into the element size times the number
-  // of elements.
-  if (const ArrayType *ATy = dyn_cast<ArrayType>(AllocTy)) {
-    const SCEV *E = getAllocSizeExpr(ATy->getElementType());
-    return getMulExpr(
-      E, getConstant(ConstantInt::get(cast<IntegerType>(E->getType()),
-                                      ATy->getNumElements())));
-  }
-
-  // Expand a vector size into the element size times the number
-  // of elements.
-  if (const VectorType *VTy = dyn_cast<VectorType>(AllocTy)) {
-    const SCEV *E = getAllocSizeExpr(VTy->getElementType());
-    return getMulExpr(
-      E, getConstant(ConstantInt::get(cast<IntegerType>(E->getType()),
-                                      VTy->getNumElements())));
-  }
-
-  // Okay, it looks like we really DO need a sizeof expr.  Check to see if we
-  // already have one, otherwise create a new one.
-  FoldingSetNodeID ID;
-  ID.AddInteger(scAllocSize);
-  ID.AddPointer(AllocTy);
-  void *IP = 0;
-  if (const SCEV *S = UniqueSCEVs.FindNodeOrInsertPos(ID, IP)) return S;
-  SCEV *S = SCEVAllocator.Allocate<SCEVAllocSizeExpr>();
-  const Type *Ty = getEffectiveSCEVType(PointerType::getUnqual(AllocTy));
-  new (S) SCEVAllocSizeExpr(ID, Ty, AllocTy);
-  UniqueSCEVs.InsertNode(S, IP);
-  return S;
+const SCEV *ScalarEvolution::getOffsetOfExpr(const Type *CTy,
+                                             Constant *FieldNo) {
+  Constant *C = ConstantExpr::getOffsetOf(CTy, FieldNo);
+  if (ConstantExpr *CE = dyn_cast<ConstantExpr>(C))
+    C = ConstantFoldConstantExpression(CE, TD);
+  const Type *Ty = getEffectiveSCEVType(PointerType::getUnqual(CTy));
+  return getTruncateOrZeroExtend(getSCEV(C), Ty);
 }
 
 const SCEV *ScalarEvolution::getUnknown(Value *V) {
@@ -2327,7 +2366,7 @@ const SCEV *ScalarEvolution::getSCEV(Value *V) {
 
 /// getIntegerSCEV - Given a SCEVable type, create a constant for the
 /// specified signed integer value and return a SCEV for the constant.
-const SCEV *ScalarEvolution::getIntegerSCEV(int Val, const Type *Ty) {
+const SCEV *ScalarEvolution::getIntegerSCEV(int64_t Val, const Type *Ty) {
   const IntegerType *ITy = cast<IntegerType>(getEffectiveSCEVType(Ty));
   return getConstant(ConstantInt::get(ITy, Val));
 }
@@ -2527,7 +2566,7 @@ ScalarEvolution::ForgetSymbolicName(Instruction *I, const SCEV *SymName) {
     if (It != Scalars.end()) {
       // Short-circuit the def-use traversal if the symbolic name
       // ceases to appear in expressions.
-      if (!It->second->hasOperand(SymName))
+      if (It->second != SymName && !It->second->hasOperand(SymName))
         continue;
 
       // SCEVUnknown for a PHI either means that it has an unrecognized
@@ -2689,16 +2728,15 @@ const SCEV *ScalarEvolution::createNodeForGEP(GEPOperator *GEP) {
       // For a struct, add the member offset.
       unsigned FieldNo = cast<ConstantInt>(Index)->getZExtValue();
       TotalOffset = getAddExpr(TotalOffset,
-                               getFieldOffsetExpr(STy, FieldNo),
+                               getOffsetOfExpr(STy, FieldNo),
                                /*HasNUW=*/false, /*HasNSW=*/InBounds);
     } else {
       // For an array, add the element offset, explicitly scaled.
       const SCEV *LocalOffset = getSCEV(Index);
-      if (!isa<PointerType>(LocalOffset->getType()))
-        // Getelementptr indicies are signed.
-        LocalOffset = getTruncateOrSignExtend(LocalOffset, IntPtrTy);
+      // Getelementptr indices are signed.
+      LocalOffset = getTruncateOrSignExtend(LocalOffset, IntPtrTy);
       // Lower "inbounds" GEPs to NSW arithmetic.
-      LocalOffset = getMulExpr(LocalOffset, getAllocSizeExpr(*GTI),
+      LocalOffset = getMulExpr(LocalOffset, getSizeOfExpr(*GTI),
                                /*HasNUW=*/false, /*HasNSW=*/InBounds);
       TotalOffset = getAddExpr(TotalOffset, LocalOffset,
                                /*HasNUW=*/false, /*HasNSW=*/InBounds);
@@ -2797,62 +2835,67 @@ ScalarEvolution::getUnsignedRange(const SCEV *S) {
   if (const SCEVConstant *C = dyn_cast<SCEVConstant>(S))
     return ConstantRange(C->getValue()->getValue());
 
+  unsigned BitWidth = getTypeSizeInBits(S->getType());
+  ConstantRange ConservativeResult(BitWidth, /*isFullSet=*/true);
+
+  // If the value has known zeros, the maximum unsigned value will have those
+  // known zeros as well.
+  uint32_t TZ = GetMinTrailingZeros(S);
+  if (TZ != 0)
+    ConservativeResult =
+      ConstantRange(APInt::getMinValue(BitWidth),
+                    APInt::getMaxValue(BitWidth).lshr(TZ).shl(TZ) + 1);
+
   if (const SCEVAddExpr *Add = dyn_cast<SCEVAddExpr>(S)) {
     ConstantRange X = getUnsignedRange(Add->getOperand(0));
     for (unsigned i = 1, e = Add->getNumOperands(); i != e; ++i)
       X = X.add(getUnsignedRange(Add->getOperand(i)));
-    return X;
+    return ConservativeResult.intersectWith(X);
   }
 
   if (const SCEVMulExpr *Mul = dyn_cast<SCEVMulExpr>(S)) {
     ConstantRange X = getUnsignedRange(Mul->getOperand(0));
     for (unsigned i = 1, e = Mul->getNumOperands(); i != e; ++i)
       X = X.multiply(getUnsignedRange(Mul->getOperand(i)));
-    return X;
+    return ConservativeResult.intersectWith(X);
   }
 
   if (const SCEVSMaxExpr *SMax = dyn_cast<SCEVSMaxExpr>(S)) {
     ConstantRange X = getUnsignedRange(SMax->getOperand(0));
     for (unsigned i = 1, e = SMax->getNumOperands(); i != e; ++i)
       X = X.smax(getUnsignedRange(SMax->getOperand(i)));
-    return X;
+    return ConservativeResult.intersectWith(X);
   }
 
   if (const SCEVUMaxExpr *UMax = dyn_cast<SCEVUMaxExpr>(S)) {
     ConstantRange X = getUnsignedRange(UMax->getOperand(0));
     for (unsigned i = 1, e = UMax->getNumOperands(); i != e; ++i)
       X = X.umax(getUnsignedRange(UMax->getOperand(i)));
-    return X;
+    return ConservativeResult.intersectWith(X);
   }
 
   if (const SCEVUDivExpr *UDiv = dyn_cast<SCEVUDivExpr>(S)) {
     ConstantRange X = getUnsignedRange(UDiv->getLHS());
     ConstantRange Y = getUnsignedRange(UDiv->getRHS());
-    return X.udiv(Y);
+    return ConservativeResult.intersectWith(X.udiv(Y));
   }
 
   if (const SCEVZeroExtendExpr *ZExt = dyn_cast<SCEVZeroExtendExpr>(S)) {
     ConstantRange X = getUnsignedRange(ZExt->getOperand());
-    return X.zeroExtend(cast<IntegerType>(ZExt->getType())->getBitWidth());
+    return ConservativeResult.intersectWith(X.zeroExtend(BitWidth));
   }
 
   if (const SCEVSignExtendExpr *SExt = dyn_cast<SCEVSignExtendExpr>(S)) {
     ConstantRange X = getUnsignedRange(SExt->getOperand());
-    return X.signExtend(cast<IntegerType>(SExt->getType())->getBitWidth());
+    return ConservativeResult.intersectWith(X.signExtend(BitWidth));
   }
 
   if (const SCEVTruncateExpr *Trunc = dyn_cast<SCEVTruncateExpr>(S)) {
     ConstantRange X = getUnsignedRange(Trunc->getOperand());
-    return X.truncate(cast<IntegerType>(Trunc->getType())->getBitWidth());
+    return ConservativeResult.intersectWith(X.truncate(BitWidth));
   }
 
-  ConstantRange FullSet(getTypeSizeInBits(S->getType()), true);
-
   if (const SCEVAddRecExpr *AddRec = dyn_cast<SCEVAddRecExpr>(S)) {
-    const SCEV *T = getBackedgeTakenCount(AddRec->getLoop());
-    const SCEVConstant *Trip = dyn_cast<SCEVConstant>(T);
-    ConstantRange ConservativeResult = FullSet;
-
     // If there's no unsigned wrap, the value will never be less than its
     // initial value.
     if (AddRec->hasNoUnsignedWrap())
@@ -2862,10 +2905,11 @@ ScalarEvolution::getUnsignedRange(const SCEV *S) {
                         APInt(getTypeSizeInBits(C->getType()), 0));
 
     // TODO: non-affine addrec
-    if (Trip && AddRec->isAffine()) {
+    if (AddRec->isAffine()) {
       const Type *Ty = AddRec->getType();
       const SCEV *MaxBECount = getMaxBackedgeTakenCount(AddRec->getLoop());
-      if (getTypeSizeInBits(MaxBECount->getType()) <= getTypeSizeInBits(Ty)) {
+      if (!isa<SCEVCouldNotCompute>(MaxBECount) &&
+          getTypeSizeInBits(MaxBECount->getType()) <= BitWidth) {
         MaxBECount = getNoopOrZeroExtend(MaxBECount, Ty);
 
         const SCEV *Start = AddRec->getStart();
@@ -2883,7 +2927,7 @@ ScalarEvolution::getUnsignedRange(const SCEV *S) {
                                    EndRange.getUnsignedMax());
         if (Min.isMinValue() && Max.isMaxValue())
           return ConservativeResult;
-        return ConstantRange(Min, Max+1);
+        return ConservativeResult.intersectWith(ConstantRange(Min, Max+1));
       }
     }
 
@@ -2897,11 +2941,11 @@ ScalarEvolution::getUnsignedRange(const SCEV *S) {
     APInt Zeros(BitWidth, 0), Ones(BitWidth, 0);
     ComputeMaskedBits(U->getValue(), Mask, Zeros, Ones, TD);
     if (Ones == ~Zeros + 1)
-      return FullSet;
-    return ConstantRange(Ones, ~Zeros + 1);
+      return ConservativeResult;
+    return ConservativeResult.intersectWith(ConstantRange(Ones, ~Zeros + 1));
   }
 
-  return FullSet;
+  return ConservativeResult;
 }
 
 /// getSignedRange - Determine the signed range for a particular SCEV.
@@ -2973,9 +3017,6 @@ ScalarEvolution::getSignedRange(const SCEV *S) {
   }
 
   if (const SCEVAddRecExpr *AddRec = dyn_cast<SCEVAddRecExpr>(S)) {
-    const SCEV *T = getBackedgeTakenCount(AddRec->getLoop());
-    const SCEVConstant *Trip = dyn_cast<SCEVConstant>(T);
-
     // If there's no signed wrap, and all the operands have the same sign or
     // zero, the value won't ever change sign.
     if (AddRec->hasNoSignedWrap()) {
@@ -2996,10 +3037,11 @@ ScalarEvolution::getSignedRange(const SCEV *S) {
     }
 
     // TODO: non-affine addrec
-    if (Trip && AddRec->isAffine()) {
+    if (AddRec->isAffine()) {
       const Type *Ty = AddRec->getType();
       const SCEV *MaxBECount = getMaxBackedgeTakenCount(AddRec->getLoop());
-      if (getTypeSizeInBits(MaxBECount->getType()) <= BitWidth) {
+      if (!isa<SCEVCouldNotCompute>(MaxBECount) &&
+          getTypeSizeInBits(MaxBECount->getType()) <= BitWidth) {
         MaxBECount = getNoopOrZeroExtend(MaxBECount, Ty);
 
         const SCEV *Start = AddRec->getStart();
@@ -3187,7 +3229,7 @@ const SCEV *ScalarEvolution::createSCEV(Value *V) {
   case Instruction::Shl:
     // Turn shift left of a constant amount into a multiply.
     if (ConstantInt *SA = dyn_cast<ConstantInt>(U->getOperand(1))) {
-      uint32_t BitWidth = cast<IntegerType>(V->getType())->getBitWidth();
+      uint32_t BitWidth = cast<IntegerType>(U->getType())->getBitWidth();
       Constant *X = ConstantInt::get(getContext(),
         APInt(BitWidth, 1).shl(SA->getLimitedValue(BitWidth)));
       return getMulExpr(getSCEV(U->getOperand(0)), getSCEV(X));
@@ -3197,7 +3239,7 @@ const SCEV *ScalarEvolution::createSCEV(Value *V) {
   case Instruction::LShr:
     // Turn logical shift right of a constant into an unsigned divide.
     if (ConstantInt *SA = dyn_cast<ConstantInt>(U->getOperand(1))) {
-      uint32_t BitWidth = cast<IntegerType>(V->getType())->getBitWidth();
+      uint32_t BitWidth = cast<IntegerType>(U->getType())->getBitWidth();
       Constant *X = ConstantInt::get(getContext(),
         APInt(BitWidth, 1).shl(SA->getLimitedValue(BitWidth)));
       return getUDivExpr(getSCEV(U->getOperand(0)), getSCEV(X));
@@ -3238,10 +3280,10 @@ const SCEV *ScalarEvolution::createSCEV(Value *V) {
       return getSCEV(U->getOperand(0));
     break;
 
-    // It's tempting to handle inttoptr and ptrtoint, however this can
-    // lead to pointer expressions which cannot be expanded to GEPs
-    // (because they may overflow). For now, the only pointer-typed
-    // expressions we handle are GEPs and address literals.
+  // It's tempting to handle inttoptr and ptrtoint as no-ops, however this can
+  // lead to pointer expressions which cannot safely be expanded to GEPs,
+  // because ScalarEvolution doesn't respect the GEP aliasing rules when
+  // simplifying integer expressions.
 
   case Instruction::GetElementPtr:
     return createNodeForGEP(cast<GEPOperator>(U));
@@ -3358,19 +3400,19 @@ ScalarEvolution::getBackedgeTakenInfo(const Loop *L) {
   std::pair<std::map<const Loop *, BackedgeTakenInfo>::iterator, bool> Pair =
     BackedgeTakenCounts.insert(std::make_pair(L, getCouldNotCompute()));
   if (Pair.second) {
-    BackedgeTakenInfo ItCount = ComputeBackedgeTakenCount(L);
-    if (ItCount.Exact != getCouldNotCompute()) {
-      assert(ItCount.Exact->isLoopInvariant(L) &&
-             ItCount.Max->isLoopInvariant(L) &&
-             "Computed trip count isn't loop invariant for loop!");
+    BackedgeTakenInfo BECount = ComputeBackedgeTakenCount(L);
+    if (BECount.Exact != getCouldNotCompute()) {
+      assert(BECount.Exact->isLoopInvariant(L) &&
+             BECount.Max->isLoopInvariant(L) &&
+             "Computed backedge-taken count isn't loop invariant for loop!");
       ++NumTripCountsComputed;
 
       // Update the value in the map.
-      Pair.first->second = ItCount;
+      Pair.first->second = BECount;
     } else {
-      if (ItCount.Max != getCouldNotCompute())
+      if (BECount.Max != getCouldNotCompute())
         // Update the value in the map.
-        Pair.first->second = ItCount;
+        Pair.first->second = BECount;
       if (isa<PHINode>(L->getHeader()->begin()))
         // Only count loops that have phi nodes as not being computable.
         ++NumTripCountsNotComputed;
@@ -3381,7 +3423,7 @@ ScalarEvolution::getBackedgeTakenInfo(const Loop *L) {
     // conservative estimates made without the benefit of trip count
     // information. This is similar to the code in forgetLoop, except that
     // it handles SCEVUnknown PHI nodes specially.
-    if (ItCount.hasAnyInfo()) {
+    if (BECount.hasAnyInfo()) {
       SmallVector<Instruction *, 16> Worklist;
       PushLoopPHIs(L, Worklist);
 
@@ -4238,9 +4280,6 @@ const SCEV *ScalarEvolution::computeSCEVAtScope(const SCEV *V, const Loop *L) {
     return getTruncateExpr(Op, Cast->getType());
   }
 
-  if (isa<SCEVTargetDataConstant>(V))
-    return V;
-
   llvm_unreachable("Unknown SCEV type!");
   return 0;
 }
@@ -5008,12 +5047,12 @@ ScalarEvolution::HowManyLessThans(const SCEV *LHS, const SCEV *RHS,
     if (Step->isOne()) {
       // With unit stride, the iteration never steps past the limit value.
     } else if (isKnownPositive(Step)) {
-      // Test whether a positive iteration iteration can step past the limit
+      // Test whether a positive iteration can step past the limit
       // value and past the maximum value for its type in a single step.
       // Note that it's not sufficient to check NoWrap here, because even
       // though the value after a wrap is undefined, it's not undefined
       // behavior, so if wrap does occur, the loop could either terminate or
-      // loop infinately, but in either case, the loop is guaranteed to
+      // loop infinitely, but in either case, the loop is guaranteed to
       // iterate at least until the iteration where the wrapping occurs.
       const SCEV *One = getIntegerSCEV(1, Step->getType());
       if (isSigned) {
diff --git a/libclamav/c++/llvm/lib/Analysis/ScalarEvolutionExpander.cpp b/libclamav/c++/llvm/lib/Analysis/ScalarEvolutionExpander.cpp
index a72f58f..15384c1 100644
--- a/libclamav/c++/llvm/lib/Analysis/ScalarEvolutionExpander.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ScalarEvolutionExpander.cpp
@@ -365,31 +365,33 @@ Value *SCEVExpander::expandAddToGEP(const SCEV *const *op_begin,
   // the indices index into the element or field type selected by the
   // preceding index.
   for (;;) {
-    const SCEV *ElSize = SE.getAllocSizeExpr(ElTy);
     // If the scale size is not 0, attempt to factor out a scale for
     // array indexing.
     SmallVector<const SCEV *, 8> ScaledOps;
-    if (ElTy->isSized() && !ElSize->isZero()) {
-      SmallVector<const SCEV *, 8> NewOps;
-      for (unsigned i = 0, e = Ops.size(); i != e; ++i) {
-        const SCEV *Op = Ops[i];
-        const SCEV *Remainder = SE.getIntegerSCEV(0, Ty);
-        if (FactorOutConstant(Op, Remainder, ElSize, SE, SE.TD)) {
-          // Op now has ElSize factored out.
-          ScaledOps.push_back(Op);
-          if (!Remainder->isZero())
-            NewOps.push_back(Remainder);
-          AnyNonZeroIndices = true;
-        } else {
-          // The operand was not divisible, so add it to the list of operands
-          // we'll scan next iteration.
-          NewOps.push_back(Ops[i]);
+    if (ElTy->isSized()) {
+      const SCEV *ElSize = SE.getSizeOfExpr(ElTy);
+      if (!ElSize->isZero()) {
+        SmallVector<const SCEV *, 8> NewOps;
+        for (unsigned i = 0, e = Ops.size(); i != e; ++i) {
+          const SCEV *Op = Ops[i];
+          const SCEV *Remainder = SE.getIntegerSCEV(0, Ty);
+          if (FactorOutConstant(Op, Remainder, ElSize, SE, SE.TD)) {
+            // Op now has ElSize factored out.
+            ScaledOps.push_back(Op);
+            if (!Remainder->isZero())
+              NewOps.push_back(Remainder);
+            AnyNonZeroIndices = true;
+          } else {
+            // The operand was not divisible, so add it to the list of operands
+            // we'll scan next iteration.
+            NewOps.push_back(Ops[i]);
+          }
+        }
+        // If we made any changes, update Ops.
+        if (!ScaledOps.empty()) {
+          Ops = NewOps;
+          SimplifyAddOperands(Ops, Ty, SE);
         }
-      }
-      // If we made any changes, update Ops.
-      if (!ScaledOps.empty()) {
-        Ops = NewOps;
-        SimplifyAddOperands(Ops, Ty, SE);
       }
     }
 
@@ -427,22 +429,22 @@ Value *SCEVExpander::expandAddToGEP(const SCEV *const *op_begin,
             }
           }
       } else {
-        // Without TargetData, just check for a SCEVFieldOffsetExpr of the
+        // Without TargetData, just check for an offsetof expression of the
         // appropriate struct type.
         for (unsigned i = 0, e = Ops.size(); i != e; ++i)
-          if (const SCEVFieldOffsetExpr *FO =
-                dyn_cast<SCEVFieldOffsetExpr>(Ops[i]))
-            if (FO->getStructType() == STy) {
-              unsigned FieldNo = FO->getFieldNo();
-              GepIndices.push_back(
-                  ConstantInt::get(Type::getInt32Ty(Ty->getContext()),
-                                   FieldNo));
-              ElTy = STy->getTypeAtIndex(FieldNo);
+          if (const SCEVUnknown *U = dyn_cast<SCEVUnknown>(Ops[i])) {
+            const Type *CTy;
+            Constant *FieldNo;
+            if (U->isOffsetOf(CTy, FieldNo) && CTy == STy) {
+              GepIndices.push_back(FieldNo);
+              ElTy =
+                STy->getTypeAtIndex(cast<ConstantInt>(FieldNo)->getZExtValue());
               Ops[i] = SE.getConstant(Ty, 0);
               AnyNonZeroIndices = true;
               FoundFieldNo = true;
               break;
             }
+          }
       }
       // If no struct field offsets were found, tentatively assume that
       // field zero was selected (since the zero offset would obviously
@@ -639,8 +641,24 @@ SCEVExpander::getAddRecExprPHILiterally(const SCEVAddRecExpr *Normalized,
   // Reuse a previously-inserted PHI, if present.
   for (BasicBlock::iterator I = L->getHeader()->begin();
        PHINode *PN = dyn_cast<PHINode>(I); ++I)
-    if (isInsertedInstruction(PN) && SE.getSCEV(PN) == Normalized)
-      return PN;
+    if (SE.isSCEVable(PN->getType()) &&
+        (SE.getEffectiveSCEVType(PN->getType()) ==
+         SE.getEffectiveSCEVType(Normalized->getType())) &&
+        SE.getSCEV(PN) == Normalized)
+      if (BasicBlock *LatchBlock = L->getLoopLatch()) {
+        // Remember this PHI, even in post-inc mode.
+        InsertedValues.insert(PN);
+        // Remember the increment.
+        Instruction *IncV =
+          cast<Instruction>(PN->getIncomingValueForBlock(LatchBlock)
+                                  ->stripPointerCasts());
+        rememberInstruction(IncV);
+        // Make sure the increment is where we want it. But don't move it
+        // down past a potential existing post-inc user.
+        if (L == IVIncInsertLoop && !SE.DT->dominates(IncV, IVIncInsertPos))
+          IncV->moveBefore(IVIncInsertPos);
+        return PN;
+      }
 
   // Save the original insertion point so we can restore it when we're done.
   BasicBlock *SaveInsertBB = Builder.GetInsertBlock();
@@ -711,7 +729,7 @@ SCEVExpander::getAddRecExprPHILiterally(const SCEVAddRecExpr *Normalized,
 
   // Restore the original insert point.
   if (SaveInsertBB)
-    Builder.SetInsertPoint(SaveInsertBB, SaveInsertPt);
+    restoreInsertPoint(SaveInsertBB, SaveInsertPt);
 
   // Remember this PHI, even in post-inc mode.
   InsertedValues.insert(PN);
@@ -774,6 +792,7 @@ Value *SCEVExpander::expandAddRecExprLiterally(const SCEVAddRecExpr *S) {
 
   // Re-apply any non-loop-dominating scale.
   if (PostLoopScale) {
+    Result = InsertNoopCastOfTo(Result, IntTy);
     Result = Builder.CreateMul(Result,
                                expandCodeFor(PostLoopScale, IntTy));
     rememberInstruction(Result);
@@ -785,6 +804,7 @@ Value *SCEVExpander::expandAddRecExprLiterally(const SCEVAddRecExpr *S) {
       const SCEV *const OffsetArray[1] = { PostLoopOffset };
       Result = expandAddToGEP(OffsetArray, OffsetArray+1, PTy, IntTy, Result);
     } else {
+      Result = InsertNoopCastOfTo(Result, IntTy);
       Result = Builder.CreateAdd(Result,
                                  expandCodeFor(PostLoopOffset, IntTy));
       rememberInstruction(Result);
@@ -825,7 +845,7 @@ Value *SCEVExpander::visitAddRecExpr(const SCEVAddRecExpr *S) {
     while (isa<PHINode>(NewInsertPt)) ++NewInsertPt;
     V = expandCodeFor(SE.getTruncateExpr(SE.getUnknown(V), Ty), 0,
                       NewInsertPt);
-    Builder.SetInsertPoint(SaveInsertBB, SaveInsertPt);
+    restoreInsertPoint(SaveInsertBB, SaveInsertPt);
     return V;
   }
 
@@ -1001,14 +1021,6 @@ Value *SCEVExpander::visitUMaxExpr(const SCEVUMaxExpr *S) {
   return LHS;
 }
 
-Value *SCEVExpander::visitFieldOffsetExpr(const SCEVFieldOffsetExpr *S) {
-  return ConstantExpr::getOffsetOf(S->getStructType(), S->getFieldNo());
-}
-
-Value *SCEVExpander::visitAllocSizeExpr(const SCEVAllocSizeExpr *S) {
-  return ConstantExpr::getSizeOf(S->getAllocType());
-}
-
 Value *SCEVExpander::expandCodeFor(const SCEV *SH, const Type *Ty) {
   // Expand the code for this SCEV.
   Value *V = expand(SH);
@@ -1059,10 +1071,32 @@ Value *SCEVExpander::expand(const SCEV *S) {
   if (!PostIncLoop)
     InsertedExpressions[std::make_pair(S, InsertPt)] = V;
 
-  Builder.SetInsertPoint(SaveInsertBB, SaveInsertPt);
+  restoreInsertPoint(SaveInsertBB, SaveInsertPt);
   return V;
 }
 
+void SCEVExpander::rememberInstruction(Value *I) {
+  if (!PostIncLoop)
+    InsertedValues.insert(I);
+
+  // If we just claimed an existing instruction and that instruction had
+  // been the insert point, adjust the insert point forward so that 
+  // subsequently inserted code will be dominated.
+  if (Builder.GetInsertPoint() == I) {
+    BasicBlock::iterator It = cast<Instruction>(I);
+    do { ++It; } while (isInsertedInstruction(It));
+    Builder.SetInsertPoint(Builder.GetInsertBlock(), It);
+  }
+}
+
+void SCEVExpander::restoreInsertPoint(BasicBlock *BB, BasicBlock::iterator I) {
+  // If we acquired more instructions since the old insert point was saved,
+  // advance past them.
+  while (isInsertedInstruction(I)) ++I;
+
+  Builder.SetInsertPoint(BB, I);
+}
+
 /// getOrInsertCanonicalInductionVariable - This method returns the
 /// canonical induction variable of the specified type for the specified
 /// loop (inserting one if there is none).  A canonical induction variable
@@ -1077,6 +1111,6 @@ SCEVExpander::getOrInsertCanonicalInductionVariable(const Loop *L,
   BasicBlock::iterator SaveInsertPt = Builder.GetInsertPoint();
   Value *V = expandCodeFor(H, 0, L->getHeader()->begin());
   if (SaveInsertBB)
-    Builder.SetInsertPoint(SaveInsertBB, SaveInsertPt);
+    restoreInsertPoint(SaveInsertBB, SaveInsertPt);
   return V;
 }
diff --git a/libclamav/c++/llvm/lib/Analysis/ValueTracking.cpp b/libclamav/c++/llvm/lib/Analysis/ValueTracking.cpp
index 91e5bc3..f9331e7 100644
--- a/libclamav/c++/llvm/lib/Analysis/ValueTracking.cpp
+++ b/libclamav/c++/llvm/lib/Analysis/ValueTracking.cpp
@@ -421,20 +421,29 @@ void llvm::ComputeMaskedBits(Value *V, const APInt &Mask,
   }
   case Instruction::SRem:
     if (ConstantInt *Rem = dyn_cast<ConstantInt>(I->getOperand(1))) {
-      APInt RA = Rem->getValue();
-      if (RA.isPowerOf2() || (-RA).isPowerOf2()) {
-        APInt LowBits = RA.isStrictlyPositive() ? (RA - 1) : ~RA;
+      APInt RA = Rem->getValue().abs();
+      if (RA.isPowerOf2()) {
+        APInt LowBits = RA - 1;
         APInt Mask2 = LowBits | APInt::getSignBit(BitWidth);
         ComputeMaskedBits(I->getOperand(0), Mask2, KnownZero2, KnownOne2, TD, 
                           Depth+1);
 
-        // If the sign bit of the first operand is zero, the sign bit of
-        // the result is zero. If the first operand has no one bits below
-        // the second operand's single 1 bit, its sign will be zero.
+        // The low bits of the first operand are unchanged by the srem.
+        KnownZero = KnownZero2 & LowBits;
+        KnownOne = KnownOne2 & LowBits;
+
+        // If the first operand is non-negative or has all low bits zero, then
+        // the upper bits are all zero.
         if (KnownZero2[BitWidth-1] || ((KnownZero2 & LowBits) == LowBits))
-          KnownZero2 |= ~LowBits;
+          KnownZero |= ~LowBits;
 
-        KnownZero |= KnownZero2 & Mask;
+        // If the first operand is negative and not all low bits are zero, then
+        // the upper bits are all one.
+        if (KnownOne2[BitWidth-1] && ((KnownOne2 & LowBits) != 0))
+          KnownOne |= ~LowBits;
+
+        KnownZero &= Mask;
+        KnownOne &= Mask;
 
         assert((KnownZero & KnownOne) == 0 && "Bits known to be one AND zero?"); 
       }
diff --git a/libclamav/c++/llvm/lib/AsmParser/LLLexer.cpp b/libclamav/c++/llvm/lib/AsmParser/LLLexer.cpp
index 2a926d2..46f3cbc 100644
--- a/libclamav/c++/llvm/lib/AsmParser/LLLexer.cpp
+++ b/libclamav/c++/llvm/lib/AsmParser/LLLexer.cpp
@@ -558,6 +558,7 @@ lltok::Kind LLLexer::LexIdentifier() {
   KEYWORD(readnone);
   KEYWORD(readonly);
 
+  KEYWORD(inlinehint);
   KEYWORD(noinline);
   KEYWORD(alwaysinline);
   KEYWORD(optsize);
@@ -569,6 +570,7 @@ lltok::Kind LLLexer::LexIdentifier() {
 
   KEYWORD(type);
   KEYWORD(opaque);
+  KEYWORD(union);
 
   KEYWORD(eq); KEYWORD(ne); KEYWORD(slt); KEYWORD(sgt); KEYWORD(sle);
   KEYWORD(sge); KEYWORD(ult); KEYWORD(ugt); KEYWORD(ule); KEYWORD(uge);
diff --git a/libclamav/c++/llvm/lib/AsmParser/LLParser.cpp b/libclamav/c++/llvm/lib/AsmParser/LLParser.cpp
index 04a5263..5cff310 100644
--- a/libclamav/c++/llvm/lib/AsmParser/LLParser.cpp
+++ b/libclamav/c++/llvm/lib/AsmParser/LLParser.cpp
@@ -947,6 +947,7 @@ bool LLParser::ParseOptionalAttrs(unsigned &Attrs, unsigned AttrKind) {
     case lltok::kw_noinline:        Attrs |= Attribute::NoInline; break;
     case lltok::kw_readnone:        Attrs |= Attribute::ReadNone; break;
     case lltok::kw_readonly:        Attrs |= Attribute::ReadOnly; break;
+    case lltok::kw_inlinehint:      Attrs |= Attribute::InlineHint; break;
     case lltok::kw_alwaysinline:    Attrs |= Attribute::AlwaysInline; break;
     case lltok::kw_optsize:         Attrs |= Attribute::OptimizeForSize; break;
     case lltok::kw_ssp:             Attrs |= Attribute::StackProtect; break;
@@ -955,6 +956,14 @@ bool LLParser::ParseOptionalAttrs(unsigned &Attrs, unsigned AttrKind) {
     case lltok::kw_noimplicitfloat: Attrs |= Attribute::NoImplicitFloat; break;
     case lltok::kw_naked:           Attrs |= Attribute::Naked; break;
 
+    case lltok::kw_alignstack: {
+      unsigned Alignment;
+      if (ParseOptionalStackAlignment(Alignment))
+        return true;
+      Attrs |= Attribute::constructStackAlignmentFromInt(Alignment);
+      continue;
+    }
+
     case lltok::kw_align: {
       unsigned Alignment;
       if (ParseOptionalAlignment(Alignment))
@@ -962,6 +971,7 @@ bool LLParser::ParseOptionalAttrs(unsigned &Attrs, unsigned AttrKind) {
       Attrs |= Attribute::constructAlignmentFromInt(Alignment);
       continue;
     }
+
     }
     Lex.Lex();
   }
@@ -1130,6 +1140,25 @@ bool LLParser::ParseOptionalCommaAlign(unsigned &Alignment,
   return false;
 }
 
+/// ParseOptionalStackAlignment
+///   ::= /* empty */
+///   ::= 'alignstack' '(' 4 ')'
+bool LLParser::ParseOptionalStackAlignment(unsigned &Alignment) {
+  Alignment = 0;
+  if (!EatIfPresent(lltok::kw_alignstack))
+    return false;
+  LocTy ParenLoc = Lex.getLoc();
+  if (!EatIfPresent(lltok::lparen))
+    return Error(ParenLoc, "expected '('");
+  LocTy AlignLoc = Lex.getLoc();
+  if (ParseUInt32(Alignment)) return true;
+  ParenLoc = Lex.getLoc();
+  if (!EatIfPresent(lltok::rparen))
+    return Error(ParenLoc, "expected ')'");
+  if (!isPowerOf2_32(Alignment))
+    return Error(AlignLoc, "stack alignment is not a power of two");
+  return false;
+}
 
 /// ParseIndexList - This parses the index list for an insert/extractvalue
 /// instruction.  This sets AteExtraComma in the case where we eat an extra
@@ -1266,6 +1295,11 @@ bool LLParser::ParseTypeRec(PATypeHolder &Result) {
     if (ParseStructType(Result, false))
       return true;
     break;
+  case lltok::kw_union:
+    // TypeRec ::= 'union' '{' ... '}'
+    if (ParseUnionType(Result))
+      return true;
+    break;
   case lltok::lsquare:
     // TypeRec ::= '[' ... ']'
     Lex.Lex(); // eat the lsquare.
@@ -1575,6 +1609,38 @@ bool LLParser::ParseStructType(PATypeHolder &Result, bool Packed) {
   return false;
 }
 
+/// ParseUnionType
+///   TypeRec
+///     ::= 'union' '{' TypeRec (',' TypeRec)* '}'
+bool LLParser::ParseUnionType(PATypeHolder &Result) {
+  assert(Lex.getKind() == lltok::kw_union);
+  Lex.Lex(); // Consume the 'union'
+
+  if (ParseToken(lltok::lbrace, "'{' expected after 'union'")) return true;
+
+  SmallVector<PATypeHolder, 8> ParamsList;
+  do {
+    LocTy EltTyLoc = Lex.getLoc();
+    if (ParseTypeRec(Result)) return true;
+    ParamsList.push_back(Result);
+
+    if (Result->isVoidTy())
+      return Error(EltTyLoc, "union element can not have void type");
+    if (!UnionType::isValidElementType(Result))
+      return Error(EltTyLoc, "invalid element type for union");
+
+  } while (EatIfPresent(lltok::comma)) ;
+
+  if (ParseToken(lltok::rbrace, "expected '}' at end of union"))
+    return true;
+
+  SmallVector<const Type*, 8> ParamsListTy;
+  for (unsigned i = 0, e = ParamsList.size(); i != e; ++i)
+    ParamsListTy.push_back(ParamsList[i].get());
+  Result = HandleUpRefs(UnionType::get(&ParamsListTy[0], ParamsListTy.size()));
+  return false;
+}
+
 /// ParseArrayVectorType - Parse an array or vector type, assuming the first
 /// token has already been consumed.
 ///   TypeRec
@@ -2134,8 +2200,8 @@ bool LLParser::ParseValID(ValID &ID, PerFunctionState *PFS) {
         ParseToken(lltok::rparen, "expected ')' in extractvalue constantexpr"))
       return true;
 
-    if (!isa<StructType>(Val->getType()) && !isa<ArrayType>(Val->getType()))
-      return Error(ID.Loc, "extractvalue operand must be array or struct");
+    if (!Val->getType()->isAggregateType())
+      return Error(ID.Loc, "extractvalue operand must be aggregate type");
     if (!ExtractValueInst::getIndexedType(Val->getType(), Indices.begin(),
                                           Indices.end()))
       return Error(ID.Loc, "invalid indices for extractvalue");
@@ -2155,8 +2221,8 @@ bool LLParser::ParseValID(ValID &ID, PerFunctionState *PFS) {
         ParseIndexList(Indices) ||
         ParseToken(lltok::rparen, "expected ')' in insertvalue constantexpr"))
       return true;
-    if (!isa<StructType>(Val0->getType()) && !isa<ArrayType>(Val0->getType()))
-      return Error(ID.Loc, "extractvalue operand must be array or struct");
+    if (!Val0->getType()->isAggregateType())
+      return Error(ID.Loc, "insertvalue operand must be aggregate type");
     if (!ExtractValueInst::getIndexedType(Val0->getType(), Indices.begin(),
                                           Indices.end()))
       return Error(ID.Loc, "invalid indices for insertvalue");
@@ -2492,8 +2558,17 @@ bool LLParser::ConvertValIDToValue(const Type *Ty, ValID &ID, Value *&V,
     V = Constant::getNullValue(Ty);
     return false;
   case ValID::t_Constant:
-    if (ID.ConstantVal->getType() != Ty)
+    if (ID.ConstantVal->getType() != Ty) {
+      // Allow a constant struct with a single member to be converted
+      // to a union, if the union has a member which is the same type
+      // as the struct member.
+      if (const UnionType* utype = dyn_cast<UnionType>(Ty)) {
+        return ParseUnionValue(utype, ID, V);
+      }
+
       return Error(ID.Loc, "constant expression type mismatch");
+    }
+
     V = ID.ConstantVal;
     return false;
   }
@@ -2523,6 +2598,22 @@ bool LLParser::ParseTypeAndBasicBlock(BasicBlock *&BB, LocTy &Loc,
   return false;
 }
 
+bool LLParser::ParseUnionValue(const UnionType* utype, ValID &ID, Value *&V) {
+  if (const StructType* stype = dyn_cast<StructType>(ID.ConstantVal->getType())) {
+    if (stype->getNumContainedTypes() != 1)
+      return Error(ID.Loc, "constant expression type mismatch");
+    int index = utype->getElementTypeIndex(stype->getContainedType(0));
+    if (index < 0)
+      return Error(ID.Loc, "initializer type is not a member of the union");
+
+    V = ConstantUnion::get(
+        utype, cast<Constant>(ID.ConstantVal->getOperand(0)));
+    return false;
+  }
+
+  return Error(ID.Loc, "constant expression type mismatch");
+}
+
 
 /// FunctionHeader
 ///   ::= OptionalLinkage OptionalVisibility OptionalCallingConv OptRetAttrs
@@ -2566,7 +2657,6 @@ bool LLParser::ParseFunctionHeader(Function *&Fn, bool isDefine) {
       return Error(LinkageLoc, "invalid linkage for function declaration");
     break;
   case GlobalValue::AppendingLinkage:
-  case GlobalValue::GhostLinkage:
   case GlobalValue::CommonLinkage:
     return Error(LinkageLoc, "invalid function linkage type");
   }
@@ -3783,8 +3873,8 @@ int LLParser::ParseExtractValue(Instruction *&Inst, PerFunctionState &PFS) {
       ParseIndexList(Indices, AteExtraComma))
     return true;
 
-  if (!isa<StructType>(Val->getType()) && !isa<ArrayType>(Val->getType()))
-    return Error(Loc, "extractvalue operand must be array or struct");
+  if (!Val->getType()->isAggregateType())
+    return Error(Loc, "extractvalue operand must be aggregate type");
 
   if (!ExtractValueInst::getIndexedType(Val->getType(), Indices.begin(),
                                         Indices.end()))
@@ -3805,8 +3895,8 @@ int LLParser::ParseInsertValue(Instruction *&Inst, PerFunctionState &PFS) {
       ParseIndexList(Indices, AteExtraComma))
     return true;
   
-  if (!isa<StructType>(Val0->getType()) && !isa<ArrayType>(Val0->getType()))
-    return Error(Loc0, "extractvalue operand must be array or struct");
+  if (!Val0->getType()->isAggregateType())
+    return Error(Loc0, "insertvalue operand must be aggregate type");
 
   if (!ExtractValueInst::getIndexedType(Val0->getType(), Indices.begin(),
                                         Indices.end()))
diff --git a/libclamav/c++/llvm/lib/AsmParser/LLParser.h b/libclamav/c++/llvm/lib/AsmParser/LLParser.h
index 85c07ff..9abe404 100644
--- a/libclamav/c++/llvm/lib/AsmParser/LLParser.h
+++ b/libclamav/c++/llvm/lib/AsmParser/LLParser.h
@@ -31,6 +31,7 @@ namespace llvm {
   class GlobalValue;
   class MDString;
   class MDNode;
+  class UnionType;
 
   /// ValID - Represents a reference of a definition of some sort with no type.
   /// There are several cases where we have to parse the value but where the
@@ -169,6 +170,7 @@ namespace llvm {
     bool ParseOptionalVisibility(unsigned &Visibility);
     bool ParseOptionalCallingConv(CallingConv::ID &CC);
     bool ParseOptionalAlignment(unsigned &Alignment);
+    bool ParseOptionalStackAlignment(unsigned &Alignment);
     bool ParseInstructionMetadata(SmallVectorImpl<std::pair<unsigned,
                                                             MDNode *> > &);
     bool ParseOptionalCommaAlign(unsigned &Alignment, bool &AteExtraComma);
@@ -211,6 +213,7 @@ namespace llvm {
     }
     bool ParseTypeRec(PATypeHolder &H);
     bool ParseStructType(PATypeHolder &H, bool Packed);
+    bool ParseUnionType(PATypeHolder &H);
     bool ParseArrayVectorType(PATypeHolder &H, bool isVector);
     bool ParseFunctionType(PATypeHolder &Result);
     PATypeHolder HandleUpRefs(const Type *Ty);
@@ -279,6 +282,8 @@ namespace llvm {
       return ParseTypeAndBasicBlock(BB, Loc, PFS);
     }
 
+    bool ParseUnionValue(const UnionType* utype, ValID &ID, Value *&V);
+
     struct ParamInfo {
       LocTy Loc;
       Value *V;
diff --git a/libclamav/c++/llvm/lib/AsmParser/LLToken.h b/libclamav/c++/llvm/lib/AsmParser/LLToken.h
index 80eb194..3ac9169 100644
--- a/libclamav/c++/llvm/lib/AsmParser/LLToken.h
+++ b/libclamav/c++/llvm/lib/AsmParser/LLToken.h
@@ -85,6 +85,7 @@ namespace lltok {
     kw_readnone,
     kw_readonly,
 
+    kw_inlinehint,
     kw_noinline,
     kw_alwaysinline,
     kw_optsize,
@@ -96,6 +97,7 @@ namespace lltok {
 
     kw_type,
     kw_opaque,
+    kw_union,
 
     kw_eq, kw_ne, kw_slt, kw_sgt, kw_sle, kw_sge, kw_ult, kw_ugt, kw_ule,
     kw_uge, kw_oeq, kw_one, kw_olt, kw_ogt, kw_ole, kw_oge, kw_ord, kw_uno,
diff --git a/libclamav/c++/llvm/lib/Bitcode/Reader/BitReader.cpp b/libclamav/c++/llvm/lib/Bitcode/Reader/BitReader.cpp
index 32b97e8..1facbc3 100644
--- a/libclamav/c++/llvm/lib/Bitcode/Reader/BitReader.cpp
+++ b/libclamav/c++/llvm/lib/Bitcode/Reader/BitReader.cpp
@@ -59,8 +59,8 @@ LLVMBool LLVMGetBitcodeModuleProvider(LLVMMemoryBufferRef MemBuf,
                                       char **OutMessage) {
   std::string Message;
 
-  *OutMP = wrap(getBitcodeModuleProvider(unwrap(MemBuf), getGlobalContext(), 
-                                         &Message));
+  *OutMP = reinterpret_cast<LLVMModuleProviderRef>(
+    getLazyBitcodeModule(unwrap(MemBuf), getGlobalContext(), &Message));
                                          
   if (!*OutMP) {
     if (OutMessage)
@@ -77,8 +77,8 @@ LLVMBool LLVMGetBitcodeModuleProviderInContext(LLVMContextRef ContextRef,
                                                char **OutMessage) {
   std::string Message;
   
-  *OutMP = wrap(getBitcodeModuleProvider(unwrap(MemBuf), *unwrap(ContextRef),
-                                         &Message));
+  *OutMP = reinterpret_cast<LLVMModuleProviderRef>(
+    getLazyBitcodeModule(unwrap(MemBuf), *unwrap(ContextRef), &Message));
   if (!*OutMP) {
     if (OutMessage)
       *OutMessage = strdup(Message.c_str());
diff --git a/libclamav/c++/llvm/lib/Bitcode/Reader/BitcodeReader.cpp b/libclamav/c++/llvm/lib/Bitcode/Reader/BitcodeReader.cpp
index 6dae45f..a0402ca 100644
--- a/libclamav/c++/llvm/lib/Bitcode/Reader/BitcodeReader.cpp
+++ b/libclamav/c++/llvm/lib/Bitcode/Reader/BitcodeReader.cpp
@@ -28,7 +28,8 @@
 using namespace llvm;
 
 void BitcodeReader::FreeState() {
-  delete Buffer;
+  if (BufferOwned)
+    delete Buffer;
   Buffer = 0;
   std::vector<PATypeHolder>().swap(TypeList);
   ValueList.clear();
@@ -584,6 +585,13 @@ bool BitcodeReader::ParseTypeTable() {
       ResultTy = StructType::get(Context, EltTys, Record[0]);
       break;
     }
+    case bitc::TYPE_CODE_UNION: {  // UNION: [eltty x N]
+      SmallVector<const Type*, 8> EltTys;
+      for (unsigned i = 0, e = Record.size(); i != e; ++i)
+        EltTys.push_back(getTypeByID(Record[i], true));
+      ResultTy = UnionType::get(&EltTys[0], EltTys.size());
+      break;
+    }
     case bitc::TYPE_CODE_ARRAY:     // ARRAY: [numelts, eltty]
       if (Record.size() < 2)
         return Error("Invalid ARRAY type record");
@@ -1241,11 +1249,7 @@ bool BitcodeReader::RememberAndSkipFunctionBody() {
 
   // Save the current stream state.
   uint64_t CurBit = Stream.GetCurrentBitNo();
-  DeferredFunctionInfo[Fn] = std::make_pair(CurBit, Fn->getLinkage());
-
-  // Set the functions linkage to GhostLinkage so we know it is lazily
-  // deserialized.
-  Fn->setLinkage(GlobalValue::GhostLinkage);
+  DeferredFunctionInfo[Fn] = CurBit;
 
   // Skip over the function block for now.
   if (Stream.SkipBlock())
@@ -1253,17 +1257,10 @@ bool BitcodeReader::RememberAndSkipFunctionBody() {
   return false;
 }
 
-bool BitcodeReader::ParseModule(const std::string &ModuleID) {
-  // Reject multiple MODULE_BLOCK's in a single bitstream.
-  if (TheModule)
-    return Error("Multiple MODULE_BLOCKs in same stream");
-
+bool BitcodeReader::ParseModule() {
   if (Stream.EnterSubBlock(bitc::MODULE_BLOCK_ID))
     return Error("Malformed block record");
 
-  // Otherwise, create the module.
-  TheModule = new Module(ModuleID, Context);
-
   SmallVector<uint64_t, 64> Record;
   std::vector<std::string> SectionTable;
   std::vector<std::string> GCTable;
@@ -1520,7 +1517,7 @@ bool BitcodeReader::ParseModule(const std::string &ModuleID) {
   return Error("Premature end of bitstream");
 }
 
-bool BitcodeReader::ParseBitcode() {
+bool BitcodeReader::ParseBitcodeInto(Module *M) {
   TheModule = 0;
 
   if (Buffer->getBufferSize() & 3)
@@ -1564,7 +1561,11 @@ bool BitcodeReader::ParseBitcode() {
         return Error("Malformed BlockInfoBlock");
       break;
     case bitc::MODULE_BLOCK_ID:
-      if (ParseModule(Buffer->getBufferIdentifier()))
+      // Reject multiple MODULE_BLOCK's in a single bitstream.
+      if (TheModule)
+        return Error("Multiple MODULE_BLOCKs in same stream");
+      TheModule = M;
+      if (ParseModule())
         return true;
       break;
     default:
@@ -2299,22 +2300,28 @@ bool BitcodeReader::ParseFunctionBody(Function *F) {
 }
 
 //===----------------------------------------------------------------------===//
-// ModuleProvider implementation
+// GVMaterializer implementation
 //===----------------------------------------------------------------------===//
 
 
-bool BitcodeReader::materializeFunction(Function *F, std::string *ErrInfo) {
-  // If it already is material, ignore the request.
-  if (!F->hasNotBeenReadFromBitcode()) return false;
+bool BitcodeReader::isMaterializable(const GlobalValue *GV) const {
+  if (const Function *F = dyn_cast<Function>(GV)) {
+    return F->isDeclaration() &&
+      DeferredFunctionInfo.count(const_cast<Function*>(F));
+  }
+  return false;
+}
 
-  DenseMap<Function*, std::pair<uint64_t, unsigned> >::iterator DFII =
-    DeferredFunctionInfo.find(F);
+bool BitcodeReader::Materialize(GlobalValue *GV, std::string *ErrInfo) {
+  Function *F = dyn_cast<Function>(GV);
+  // If it's not a function or is already material, ignore the request.
+  if (!F || !F->isMaterializable()) return false;
+
+  DenseMap<Function*, uint64_t>::iterator DFII = DeferredFunctionInfo.find(F);
   assert(DFII != DeferredFunctionInfo.end() && "Deferred function not found!");
 
-  // Move the bit stream to the saved position of the deferred function body and
-  // restore the real linkage type for the function.
-  Stream.JumpToBit(DFII->second.first);
-  F->setLinkage((GlobalValue::LinkageTypes)DFII->second.second);
+  // Move the bit stream to the saved position of the deferred function body.
+  Stream.JumpToBit(DFII->second);
 
   if (ParseFunctionBody(F)) {
     if (ErrInfo) *ErrInfo = ErrorString;
@@ -2336,27 +2343,36 @@ bool BitcodeReader::materializeFunction(Function *F, std::string *ErrInfo) {
   return false;
 }
 
-void BitcodeReader::dematerializeFunction(Function *F) {
-  // If this function isn't materialized, or if it is a proto, this is a noop.
-  if (F->hasNotBeenReadFromBitcode() || F->isDeclaration())
+bool BitcodeReader::isDematerializable(const GlobalValue *GV) const {
+  const Function *F = dyn_cast<Function>(GV);
+  if (!F || F->isDeclaration())
+    return false;
+  return DeferredFunctionInfo.count(const_cast<Function*>(F));
+}
+
+void BitcodeReader::Dematerialize(GlobalValue *GV) {
+  Function *F = dyn_cast<Function>(GV);
+  // If this function isn't dematerializable, this is a noop.
+  if (!F || !isDematerializable(F))
     return;
 
   assert(DeferredFunctionInfo.count(F) && "No info to read function later?");
 
   // Just forget the function body, we can remat it later.
   F->deleteBody();
-  F->setLinkage(GlobalValue::GhostLinkage);
 }
 
 
-Module *BitcodeReader::materializeModule(std::string *ErrInfo) {
+bool BitcodeReader::MaterializeModule(Module *M, std::string *ErrInfo) {
+  assert(M == TheModule &&
+         "Can only Materialize the Module this BitcodeReader is attached to.");
   // Iterate over the module, deserializing any functions that are still on
   // disk.
   for (Module::iterator F = TheModule->begin(), E = TheModule->end();
        F != E; ++F)
-    if (F->hasNotBeenReadFromBitcode() &&
-        materializeFunction(F, ErrInfo))
-      return 0;
+    if (F->isMaterializable() &&
+        Materialize(F, ErrInfo))
+      return true;
 
   // Upgrade any intrinsic calls that slipped through (should not happen!) and
   // delete the old functions to clean up. We can't do this unless the entire
@@ -2380,19 +2396,7 @@ Module *BitcodeReader::materializeModule(std::string *ErrInfo) {
   // Check debug info intrinsics.
   CheckDebugInfoIntrinsics(TheModule);
 
-  return TheModule;
-}
-
-
-/// This method is provided by the parent ModuleProvde class and overriden
-/// here. It simply releases the module from its provided and frees up our
-/// state.
-/// @brief Release our hold on the generated module
-Module *BitcodeReader::releaseModule(std::string *ErrInfo) {
-  // Since we're losing control of this Module, we must hand it back complete
-  Module *M = ModuleProvider::releaseModule(ErrInfo);
-  FreeState();
-  return M;
+  return false;
 }
 
 
@@ -2400,45 +2404,41 @@ Module *BitcodeReader::releaseModule(std::string *ErrInfo) {
 // External interface
 //===----------------------------------------------------------------------===//
 
-/// getBitcodeModuleProvider - lazy function-at-a-time loading from a file.
+/// getLazyBitcodeModule - lazy function-at-a-time loading from a file.
 ///
-ModuleProvider *llvm::getBitcodeModuleProvider(MemoryBuffer *Buffer,
-                                               LLVMContext& Context,
-                                               std::string *ErrMsg) {
+Module *llvm::getLazyBitcodeModule(MemoryBuffer *Buffer,
+                                   LLVMContext& Context,
+                                   std::string *ErrMsg) {
+  Module *M = new Module(Buffer->getBufferIdentifier(), Context);
   BitcodeReader *R = new BitcodeReader(Buffer, Context);
-  if (R->ParseBitcode()) {
+  M->setMaterializer(R);
+  if (R->ParseBitcodeInto(M)) {
     if (ErrMsg)
       *ErrMsg = R->getErrorString();
 
-    // Don't let the BitcodeReader dtor delete 'Buffer'.
-    R->releaseMemoryBuffer();
-    delete R;
+    delete M;  // Also deletes R.
     return 0;
   }
-  return R;
+  // Have the BitcodeReader dtor delete 'Buffer'.
+  R->setBufferOwned(true);
+  return M;
 }
 
 /// ParseBitcodeFile - Read the specified bitcode file, returning the module.
 /// If an error occurs, return null and fill in *ErrMsg if non-null.
 Module *llvm::ParseBitcodeFile(MemoryBuffer *Buffer, LLVMContext& Context,
                                std::string *ErrMsg){
-  BitcodeReader *R;
-  R = static_cast<BitcodeReader*>(getBitcodeModuleProvider(Buffer, Context,
-                                                           ErrMsg));
-  if (!R) return 0;
-
-  // Read in the entire module.
-  Module *M = R->materializeModule(ErrMsg);
+  Module *M = getLazyBitcodeModule(Buffer, Context, ErrMsg);
+  if (!M) return 0;
 
   // Don't let the BitcodeReader dtor delete 'Buffer', regardless of whether
   // there was an error.
-  R->releaseMemoryBuffer();
+  static_cast<BitcodeReader*>(M->getMaterializer())->setBufferOwned(false);
 
-  // If there was no error, tell ModuleProvider not to delete it when its dtor
-  // is run.
-  if (M)
-    M = R->releaseModule(ErrMsg);
-
-  delete R;
+  // Read in the entire module, and destroy the BitcodeReader.
+  if (M->MaterializeAllPermanently(ErrMsg)) {
+    delete M;
+    return NULL;
+  }
   return M;
 }
diff --git a/libclamav/c++/llvm/lib/Bitcode/Reader/BitcodeReader.h b/libclamav/c++/llvm/lib/Bitcode/Reader/BitcodeReader.h
index bb3961a..55c71f7 100644
--- a/libclamav/c++/llvm/lib/Bitcode/Reader/BitcodeReader.h
+++ b/libclamav/c++/llvm/lib/Bitcode/Reader/BitcodeReader.h
@@ -14,7 +14,7 @@
 #ifndef BITCODE_READER_H
 #define BITCODE_READER_H
 
-#include "llvm/ModuleProvider.h"
+#include "llvm/GVMaterializer.h"
 #include "llvm/Attributes.h"
 #include "llvm/Type.h"
 #include "llvm/OperandTraits.h"
@@ -121,9 +121,11 @@ public:
   void AssignValue(Value *V, unsigned Idx);
 };
 
-class BitcodeReader : public ModuleProvider {
+class BitcodeReader : public GVMaterializer {
   LLVMContext &Context;
+  Module *TheModule;
   MemoryBuffer *Buffer;
+  bool BufferOwned;
   BitstreamReader StreamFile;
   BitstreamCursor Stream;
   
@@ -160,9 +162,9 @@ class BitcodeReader : public ModuleProvider {
   bool HasReversedFunctionsWithBodies;
   
   /// DeferredFunctionInfo - When function bodies are initially scanned, this
-  /// map contains info about where to find deferred function body (in the
-  /// stream) and what linkage the original function had.
-  DenseMap<Function*, std::pair<uint64_t, unsigned> > DeferredFunctionInfo;
+  /// map contains info about where to find deferred function body in the
+  /// stream.
+  DenseMap<Function*, uint64_t> DeferredFunctionInfo;
   
   /// BlockAddrFwdRefs - These are blockaddr references to basic blocks.  These
   /// are resolved lazily when functions are loaded.
@@ -171,7 +173,8 @@ class BitcodeReader : public ModuleProvider {
   
 public:
   explicit BitcodeReader(MemoryBuffer *buffer, LLVMContext &C)
-    : Context(C), Buffer(buffer), ErrorString(0), ValueList(C), MDValueList(C) {
+    : Context(C), TheModule(0), Buffer(buffer), BufferOwned(false),
+      ErrorString(0), ValueList(C), MDValueList(C) {
     HasReversedFunctionsWithBodies = false;
   }
   ~BitcodeReader() {
@@ -180,17 +183,15 @@ public:
   
   void FreeState();
   
-  /// releaseMemoryBuffer - This causes the reader to completely forget about
-  /// the memory buffer it contains, which prevents the buffer from being
-  /// destroyed when it is deleted.
-  void releaseMemoryBuffer() {
-    Buffer = 0;
-  }
+  /// setBufferOwned - If this is true, the reader will destroy the MemoryBuffer
+  /// when the reader is destroyed.
+  void setBufferOwned(bool Owned) { BufferOwned = Owned; }
   
-  virtual bool materializeFunction(Function *F, std::string *ErrInfo = 0);
-  virtual Module *materializeModule(std::string *ErrInfo = 0);
-  virtual void dematerializeFunction(Function *F);
-  virtual Module *releaseModule(std::string *ErrInfo = 0);
+  virtual bool isMaterializable(const GlobalValue *GV) const;
+  virtual bool isDematerializable(const GlobalValue *GV) const;
+  virtual bool Materialize(GlobalValue *GV, std::string *ErrInfo = 0);
+  virtual bool MaterializeModule(Module *M, std::string *ErrInfo = 0);
+  virtual void Dematerialize(GlobalValue *GV);
 
   bool Error(const char *Str) {
     ErrorString = Str;
@@ -200,7 +201,7 @@ public:
   
   /// @brief Main interface to parsing a bitcode buffer.
   /// @returns true if an error occurred.
-  bool ParseBitcode();
+  bool ParseBitcodeInto(Module *M);
 private:
   const Type *getTypeByID(unsigned ID, bool isTypeTable = false);
   Value *getFnValueByID(unsigned ID, const Type *Ty) {
@@ -248,7 +249,7 @@ private:
   }
 
   
-  bool ParseModule(const std::string &ModuleID);
+  bool ParseModule();
   bool ParseAttributeBlock();
   bool ParseTypeTable();
   bool ParseTypeSymbolTable();
diff --git a/libclamav/c++/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp b/libclamav/c++/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp
index 5a4a1b2..82e73b5 100644
--- a/libclamav/c++/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp
+++ b/libclamav/c++/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp
@@ -181,6 +181,14 @@ static void WriteTypeTable(const ValueEnumerator &VE, BitstreamWriter &Stream) {
                             Log2_32_Ceil(VE.getTypes().size()+1)));
   unsigned StructAbbrev = Stream.EmitAbbrev(Abbv);
 
+  // Abbrev for TYPE_CODE_UNION.
+  Abbv = new BitCodeAbbrev();
+  Abbv->Add(BitCodeAbbrevOp(bitc::TYPE_CODE_UNION));
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Array));
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed,
+                            Log2_32_Ceil(VE.getTypes().size()+1)));
+  unsigned UnionAbbrev = Stream.EmitAbbrev(Abbv);
+
   // Abbrev for TYPE_CODE_ARRAY.
   Abbv = new BitCodeAbbrev();
   Abbv->Add(BitCodeAbbrevOp(bitc::TYPE_CODE_ARRAY));
@@ -250,6 +258,17 @@ static void WriteTypeTable(const ValueEnumerator &VE, BitstreamWriter &Stream) {
       AbbrevToUse = StructAbbrev;
       break;
     }
+    case Type::UnionTyID: {
+      const UnionType *UT = cast<UnionType>(T);
+      // UNION: [eltty x N]
+      Code = bitc::TYPE_CODE_UNION;
+      // Output all of the element types.
+      for (UnionType::element_iterator I = UT->element_begin(),
+           E = UT->element_end(); I != E; ++I)
+        TypeVals.push_back(VE.getTypeID(*I));
+      AbbrevToUse = UnionAbbrev;
+      break;
+    }
     case Type::ArrayTyID: {
       const ArrayType *AT = cast<ArrayType>(T);
       // ARRAY: [numelts, eltty]
@@ -280,7 +299,6 @@ static void WriteTypeTable(const ValueEnumerator &VE, BitstreamWriter &Stream) {
 static unsigned getEncodedLinkage(const GlobalValue *GV) {
   switch (GV->getLinkage()) {
   default: llvm_unreachable("Invalid linkage!");
-  case GlobalValue::GhostLinkage:  // Map ghost linkage onto external.
   case GlobalValue::ExternalLinkage:            return 0;
   case GlobalValue::WeakAnyLinkage:             return 1;
   case GlobalValue::AppendingLinkage:           return 2;
@@ -499,7 +517,7 @@ static void WriteModuleMetadata(const ValueEnumerator &VE,
   for (unsigned i = 0, e = Vals.size(); i != e; ++i) {
 
     if (const MDNode *N = dyn_cast<MDNode>(Vals[i].first)) {
-      if (!N->isFunctionLocal()) {
+      if (!N->isFunctionLocal() || !N->getFunction()) {
         if (!StartedMetadataBlock) {
           Stream.EnterSubblock(bitc::METADATA_BLOCK_ID, 3);
           StartedMetadataBlock = true;
@@ -563,7 +581,7 @@ static void WriteFunctionLocalMetadata(const Function &F,
   
   for (unsigned i = 0, e = Vals.size(); i != e; ++i)
     if (const MDNode *N = dyn_cast<MDNode>(Vals[i].first))
-      if (N->getFunction() == &F) {
+      if (N->isFunctionLocal() && N->getFunction() == &F) {
         if (!StartedMetadataBlock) {
           Stream.EnterSubblock(bitc::METADATA_BLOCK_ID, 3);
           StartedMetadataBlock = true;
@@ -790,7 +808,7 @@ static void WriteConstants(unsigned FirstVal, unsigned LastVal,
       else if (isCStr7)
         AbbrevToUse = CString7Abbrev;
     } else if (isa<ConstantArray>(C) || isa<ConstantStruct>(V) ||
-               isa<ConstantVector>(V)) {
+               isa<ConstantUnion>(C) || isa<ConstantVector>(V)) {
       Code = bitc::CST_CODE_AGGREGATE;
       for (unsigned i = 0, e = C->getNumOperands(); i != e; ++i)
         Record.push_back(VE.getValueID(C->getOperand(i)));
@@ -1511,16 +1529,50 @@ enum {
   DarwinBCHeaderSize = 5*4
 };
 
+/// isARMTriplet - Return true if the triplet looks like:
+/// arm-*, thumb-*, armv[0-9]-*, thumbv[0-9]-*, armv5te-*, or armv6t2-*.
+static bool isARMTriplet(const std::string &TT) {
+  size_t Pos = 0;
+  size_t Size = TT.size();
+  if (Size >= 6 &&
+      TT[0] == 't' && TT[1] == 'h' && TT[2] == 'u' &&
+      TT[3] == 'm' && TT[4] == 'b')
+    Pos = 5;
+  else if (Size >= 4 && TT[0] == 'a' && TT[1] == 'r' && TT[2] == 'm')
+    Pos = 3;
+  else
+    return false;
+
+  if (TT[Pos] == '-')
+    return true;
+  else if (TT[Pos] == 'v') {
+    if (Size >= Pos+4 &&
+        TT[Pos+1] == '6' && TT[Pos+2] == 't' && TT[Pos+3] == '2')
+      return true;
+    else if (Size >= Pos+4 &&
+             TT[Pos+1] == '5' && TT[Pos+2] == 't' && TT[Pos+3] == 'e')
+      return true;
+  } else
+    return false;
+  while (++Pos < Size && TT[Pos] != '-') {
+    if (!isdigit(TT[Pos]))
+      return false;
+  }
+  return true;
+}
+
 static void EmitDarwinBCHeader(BitstreamWriter &Stream,
                                const std::string &TT) {
   unsigned CPUType = ~0U;
 
-  // Match x86_64-*, i[3-9]86-*, powerpc-*, powerpc64-*.  The CPUType is a
-  // magic number from /usr/include/mach/machine.h.  It is ok to reproduce the
+  // Match x86_64-*, i[3-9]86-*, powerpc-*, powerpc64-*, arm-*, thumb-*,
+  // armv[0-9]-*, thumbv[0-9]-*, armv5te-*, or armv6t2-*. The CPUType is a magic
+  // number from /usr/include/mach/machine.h.  It is ok to reproduce the
   // specific constants here because they are implicitly part of the Darwin ABI.
   enum {
     DARWIN_CPU_ARCH_ABI64      = 0x01000000,
     DARWIN_CPU_TYPE_X86        = 7,
+    DARWIN_CPU_TYPE_ARM        = 12,
     DARWIN_CPU_TYPE_POWERPC    = 18
   };
 
@@ -1533,6 +1585,8 @@ static void EmitDarwinBCHeader(BitstreamWriter &Stream,
     CPUType = DARWIN_CPU_TYPE_POWERPC;
   else if (TT.find("powerpc64-") == 0)
     CPUType = DARWIN_CPU_TYPE_POWERPC | DARWIN_CPU_ARCH_ABI64;
+  else if (isARMTriplet(TT))
+    CPUType = DARWIN_CPU_TYPE_ARM;
 
   // Traditional Bitcode starts after header.
   unsigned BCOffset = DarwinBCHeaderSize;
diff --git a/libclamav/c++/llvm/lib/Bitcode/Writer/ValueEnumerator.cpp b/libclamav/c++/llvm/lib/Bitcode/Writer/ValueEnumerator.cpp
index c46d735..595497f 100644
--- a/libclamav/c++/llvm/lib/Bitcode/Writer/ValueEnumerator.cpp
+++ b/libclamav/c++/llvm/lib/Bitcode/Writer/ValueEnumerator.cpp
@@ -93,7 +93,7 @@ ValueEnumerator::ValueEnumerator(const Module *M) {
         for (User::const_op_iterator OI = I->op_begin(), E = I->op_end();
              OI != E; ++OI) {
           if (MDNode *MD = dyn_cast<MDNode>(*OI))
-            if (MD->isFunctionLocal())
+            if (MD->isFunctionLocal() && MD->getFunction())
               // These will get enumerated during function-incorporation.
               continue;
           EnumerateOperandType(*OI);
@@ -408,21 +408,25 @@ void ValueEnumerator::incorporateFunction(const Function &F) {
 
   FirstInstID = Values.size();
 
+  SmallVector<MDNode *, 8> FunctionLocalMDs;
   // Add all of the instructions.
   for (Function::const_iterator BB = F.begin(), E = F.end(); BB != E; ++BB) {
     for (BasicBlock::const_iterator I = BB->begin(), E = BB->end(); I!=E; ++I) {
       for (User::const_op_iterator OI = I->op_begin(), E = I->op_end();
            OI != E; ++OI) {
         if (MDNode *MD = dyn_cast<MDNode>(*OI))
-          if (!MD->isFunctionLocal())
-              // These were already enumerated during ValueEnumerator creation.
-              continue;
-        EnumerateOperandType(*OI);
+          if (MD->isFunctionLocal() && MD->getFunction())
+            // Enumerate metadata after the instructions they might refer to.
+            FunctionLocalMDs.push_back(MD);
       }
       if (!I->getType()->isVoidTy())
         EnumerateValue(I);
     }
   }
+
+  // Add all of the function-local metadata.
+  for (unsigned i = 0, e = FunctionLocalMDs.size(); i != e; ++i)
+    EnumerateOperandType(FunctionLocalMDs[i]);
 }
 
 void ValueEnumerator::purgeFunction() {
diff --git a/libclamav/c++/llvm/lib/CodeGen/AggressiveAntiDepBreaker.cpp b/libclamav/c++/llvm/lib/CodeGen/AggressiveAntiDepBreaker.cpp
index ca1f4a3..8840622 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AggressiveAntiDepBreaker.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AggressiveAntiDepBreaker.cpp
@@ -425,8 +425,7 @@ void AggressiveAntiDepBreaker::PrescanInstruction(MachineInstr *MI,
     unsigned Reg = MO.getReg();
     if (Reg == 0) continue;
     // Ignore KILLs and passthru registers for liveness...
-    if ((MI->getOpcode() == TargetInstrInfo::KILL) ||
-        (PassthruRegs.count(Reg) != 0))
+    if (MI->isKill() || (PassthruRegs.count(Reg) != 0))
       continue;
 
     // Update def for Reg and aliases.
@@ -481,7 +480,7 @@ void AggressiveAntiDepBreaker::ScanInstruction(MachineInstr *MI,
 
   // Form a group of all defs and uses of a KILL instruction to ensure
   // that all registers are renamed as a group.
-  if (MI->getOpcode() == TargetInstrInfo::KILL) {
+  if (MI->isKill()) {
     DEBUG(dbgs() << "\tKill Group:");
 
     unsigned FirstReg = 0;
@@ -792,7 +791,7 @@ unsigned AggressiveAntiDepBreaker::BreakAntiDependencies(
 
     // Ignore KILL instructions (they form a group in ScanInstruction
     // but don't cause any anti-dependence breaking themselves)
-    if (MI->getOpcode() != TargetInstrInfo::KILL) {
+    if (!MI->isKill()) {
       // Attempt to break each anti-dependency...
       for (unsigned i = 0, e = Edges.size(); i != e; ++i) {
         SDep *Edge = Edges[i];
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp
index 42bf352..fc08384 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp
@@ -11,6 +11,7 @@
 //
 //===----------------------------------------------------------------------===//
 
+#define DEBUG_TYPE "asm-printer"
 #include "llvm/CodeGen/AsmPrinter.h"
 #include "llvm/Assembly/Writer.h"
 #include "llvm/DerivedTypes.h"
@@ -24,6 +25,7 @@
 #include "llvm/CodeGen/MachineJumpTableInfo.h"
 #include "llvm/CodeGen/MachineLoopInfo.h"
 #include "llvm/CodeGen/MachineModuleInfo.h"
+#include "llvm/Analysis/ConstantFolding.h"
 #include "llvm/Analysis/DebugInfo.h"
 #include "llvm/MC/MCContext.h"
 #include "llvm/MC/MCExpr.h"
@@ -31,10 +33,6 @@
 #include "llvm/MC/MCSection.h"
 #include "llvm/MC/MCStreamer.h"
 #include "llvm/MC/MCSymbol.h"
-#include "llvm/Support/CommandLine.h"
-#include "llvm/Support/ErrorHandling.h"
-#include "llvm/Support/Format.h"
-#include "llvm/Support/FormattedStream.h"
 #include "llvm/MC/MCAsmInfo.h"
 #include "llvm/Target/Mangler.h"
 #include "llvm/Target/TargetData.h"
@@ -45,37 +43,27 @@
 #include "llvm/Target/TargetRegisterInfo.h"
 #include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/ADT/SmallString.h"
+#include "llvm/ADT/Statistic.h"
+#include "llvm/Support/CommandLine.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Support/ErrorHandling.h"
+#include "llvm/Support/Format.h"
+#include "llvm/Support/FormattedStream.h"
 #include <cerrno>
 using namespace llvm;
 
-static cl::opt<cl::boolOrDefault>
-AsmVerbose("asm-verbose", cl::desc("Add comments to directives."),
-           cl::init(cl::BOU_UNSET));
-
-static bool getVerboseAsm(bool VDef) {
-  switch (AsmVerbose) {
-  default:
-  case cl::BOU_UNSET: return VDef;
-  case cl::BOU_TRUE:  return true;
-  case cl::BOU_FALSE: return false;
-  }      
-}
+STATISTIC(EmittedInsts, "Number of machine instrs printed");
 
 char AsmPrinter::ID = 0;
 AsmPrinter::AsmPrinter(formatted_raw_ostream &o, TargetMachine &tm,
-                       const MCAsmInfo *T, bool VDef)
+                       MCContext &Ctx, MCStreamer &Streamer,
+                       const MCAsmInfo *T)
   : MachineFunctionPass(&ID), O(o),
     TM(tm), MAI(T), TRI(tm.getRegisterInfo()),
-
-    OutContext(*new MCContext()),
-    // FIXME: Pass instprinter to streamer.
-    OutStreamer(*createAsmStreamer(OutContext, O, *T,
-                                   TM.getTargetData()->isLittleEndian(),
-                                   getVerboseAsm(VDef), 0)),
-
+    OutContext(Ctx), OutStreamer(Streamer),
     LastMI(0), LastFn(0), Counter(~0U), PrevDLT(NULL) {
   DW = 0; MMI = 0;
-  VerboseAsm = getVerboseAsm(VDef);
+  VerboseAsm = Streamer.isVerboseAsm();
 }
 
 AsmPrinter::~AsmPrinter() {
@@ -150,6 +138,52 @@ bool AsmPrinter::doInitialization(Module &M) {
   return false;
 }
 
+void AsmPrinter::EmitLinkage(unsigned Linkage, MCSymbol *GVSym) const {
+  switch ((GlobalValue::LinkageTypes)Linkage) {
+  case GlobalValue::CommonLinkage:
+  case GlobalValue::LinkOnceAnyLinkage:
+  case GlobalValue::LinkOnceODRLinkage:
+  case GlobalValue::WeakAnyLinkage:
+  case GlobalValue::WeakODRLinkage:
+  case GlobalValue::LinkerPrivateLinkage:
+    if (MAI->getWeakDefDirective() != 0) {
+      // .globl _foo
+      OutStreamer.EmitSymbolAttribute(GVSym, MCSA_Global);
+      // .weak_definition _foo
+      OutStreamer.EmitSymbolAttribute(GVSym, MCSA_WeakDefinition);
+    } else if (const char *LinkOnce = MAI->getLinkOnceDirective()) {
+      // .globl _foo
+      OutStreamer.EmitSymbolAttribute(GVSym, MCSA_Global);
+      // FIXME: linkonce should be a section attribute, handled by COFF Section
+      // assignment.
+      // http://sourceware.org/binutils/docs-2.20/as/Linkonce.html#Linkonce
+      // .linkonce discard
+      // FIXME: It would be nice to use .linkonce samesize for non-common
+      // globals.
+      O << LinkOnce;
+    } else {
+      // .weak _foo
+      OutStreamer.EmitSymbolAttribute(GVSym, MCSA_Weak);
+    }
+    break;
+  case GlobalValue::DLLExportLinkage:
+  case GlobalValue::AppendingLinkage:
+    // FIXME: appending linkage variables should go into a section of
+    // their name or something.  For now, just emit them as external.
+  case GlobalValue::ExternalLinkage:
+    // If external or appending, declare as a global symbol.
+    // .globl _foo
+    OutStreamer.EmitSymbolAttribute(GVSym, MCSA_Global);
+    break;
+  case GlobalValue::PrivateLinkage:
+  case GlobalValue::InternalLinkage:
+    break;
+  default:
+    llvm_unreachable("Unknown linkage type!");
+  }
+}
+
+
 /// EmitGlobalVariable - Emit the specified global variable to the .s file.
 void AsmPrinter::EmitGlobalVariable(const GlobalVariable *GV) {
   if (!GV->hasInitializer())   // External globals require no code.
@@ -160,7 +194,7 @@ void AsmPrinter::EmitGlobalVariable(const GlobalVariable *GV) {
     return;
 
   MCSymbol *GVSym = GetGlobalValueSymbol(GV);
-  printVisibility(GVSym, GV->getVisibility());
+  EmitVisibility(GVSym, GV->getVisibility());
 
   if (MAI->hasDotTypeDotSizeDirective())
     OutStreamer.EmitSymbolAttribute(GVSym, MCSA_ELF_TypeObject);
@@ -225,50 +259,9 @@ void AsmPrinter::EmitGlobalVariable(const GlobalVariable *GV) {
 
   OutStreamer.SwitchSection(TheSection);
 
-  // TODO: Factor into an 'emit linkage' thing that is shared with function
-  // bodies.
-  switch (GV->getLinkage()) {
-  case GlobalValue::CommonLinkage:
-  case GlobalValue::LinkOnceAnyLinkage:
-  case GlobalValue::LinkOnceODRLinkage:
-  case GlobalValue::WeakAnyLinkage:
-  case GlobalValue::WeakODRLinkage:
-  case GlobalValue::LinkerPrivateLinkage:
-    if (MAI->getWeakDefDirective() != 0) {
-      // .globl _foo
-      OutStreamer.EmitSymbolAttribute(GVSym, MCSA_Global);
-      // .weak_definition _foo
-      OutStreamer.EmitSymbolAttribute(GVSym, MCSA_WeakDefinition);
-    } else if (const char *LinkOnce = MAI->getLinkOnceDirective()) {
-      // .globl _foo
-      OutStreamer.EmitSymbolAttribute(GVSym, MCSA_Global);
-      // FIXME: linkonce should be a section attribute, handled by COFF Section
-      // assignment.
-      // http://sourceware.org/binutils/docs-2.20/as/Linkonce.html#Linkonce
-      // .linkonce same_size
-      O << LinkOnce;
-    } else {
-      // .weak _foo
-      OutStreamer.EmitSymbolAttribute(GVSym, MCSA_Weak);
-    }
-    break;
-  case GlobalValue::DLLExportLinkage:
-  case GlobalValue::AppendingLinkage:
-    // FIXME: appending linkage variables should go into a section of
-    // their name or something.  For now, just emit them as external.
-  case GlobalValue::ExternalLinkage:
-    // If external or appending, declare as a global symbol.
-    // .globl _foo
-    OutStreamer.EmitSymbolAttribute(GVSym, MCSA_Global);
-    break;
-  case GlobalValue::PrivateLinkage:
-  case GlobalValue::InternalLinkage:
-     break;
-  default:
-    llvm_unreachable("Unknown linkage type!");
-  }
-
+  EmitLinkage(GV->getLinkage(), GVSym);
   EmitAlignment(AlignLog, GV);
+
   if (VerboseAsm) {
     WriteAsOperand(OutStreamer.GetCommentOS(), GV,
                    /*PrintType=*/false, GV->getParent());
@@ -285,6 +278,183 @@ void AsmPrinter::EmitGlobalVariable(const GlobalVariable *GV) {
   OutStreamer.AddBlankLine();
 }
 
+/// EmitFunctionHeader - This method emits the header for the current
+/// function.
+void AsmPrinter::EmitFunctionHeader() {
+  // Print out constants referenced by the function
+  EmitConstantPool();
+  
+  // Print the 'header' of function.
+  const Function *F = MF->getFunction();
+
+  OutStreamer.SwitchSection(getObjFileLowering().SectionForGlobal(F, Mang, TM));
+  EmitVisibility(CurrentFnSym, F->getVisibility());
+
+  EmitLinkage(F->getLinkage(), CurrentFnSym);
+  EmitAlignment(MF->getAlignment(), F);
+
+  if (MAI->hasDotTypeDotSizeDirective())
+    OutStreamer.EmitSymbolAttribute(CurrentFnSym, MCSA_ELF_TypeFunction);
+
+  if (VerboseAsm) {
+    WriteAsOperand(OutStreamer.GetCommentOS(), F,
+                   /*PrintType=*/false, F->getParent());
+    OutStreamer.GetCommentOS() << '\n';
+  }
+
+  // Emit the CurrentFnSym.  This is a virtual function to allow targets to
+  // do their wild and crazy things as required.
+  EmitFunctionEntryLabel();
+  
+  // Add some workaround for linkonce linkage on Cygwin\MinGW.
+  if (MAI->getLinkOnceDirective() != 0 &&
+      (F->hasLinkOnceLinkage() || F->hasWeakLinkage()))
+    // FIXME: What is this?
+    O << "Lllvm$workaround$fake$stub$" << *CurrentFnSym << ":\n";
+  
+  // Emit pre-function debug and/or EH information.
+  if (MAI->doesSupportDebugInformation() || MAI->doesSupportExceptionHandling())
+    DW->BeginFunction(MF);
+}
+
+/// EmitFunctionEntryLabel - Emit the label that is the entrypoint for the
+/// function.  This can be overridden by targets as required to do custom stuff.
+void AsmPrinter::EmitFunctionEntryLabel() {
+  OutStreamer.EmitLabel(CurrentFnSym);
+}
+
+
+/// EmitComments - Pretty-print comments for instructions.
+static void EmitComments(const MachineInstr &MI, raw_ostream &CommentOS) {
+  const MachineFunction *MF = MI.getParent()->getParent();
+  const TargetMachine &TM = MF->getTarget();
+  
+  if (!MI.getDebugLoc().isUnknown()) {
+    DILocation DLT = MF->getDILocation(MI.getDebugLoc());
+    
+    // Print source line info.
+    DIScope Scope = DLT.getScope();
+    // Omit the directory, because it's likely to be long and uninteresting.
+    if (!Scope.isNull())
+      CommentOS << Scope.getFilename();
+    else
+      CommentOS << "<unknown>";
+    CommentOS << ':' << DLT.getLineNumber();
+    if (DLT.getColumnNumber() != 0)
+      CommentOS << ':' << DLT.getColumnNumber();
+    CommentOS << '\n';
+  }
+  
+  // Check for spills and reloads
+  int FI;
+  
+  const MachineFrameInfo *FrameInfo = MF->getFrameInfo();
+  
+  // We assume a single instruction only has a spill or reload, not
+  // both.
+  const MachineMemOperand *MMO;
+  if (TM.getInstrInfo()->isLoadFromStackSlotPostFE(&MI, FI)) {
+    if (FrameInfo->isSpillSlotObjectIndex(FI)) {
+      MMO = *MI.memoperands_begin();
+      CommentOS << MMO->getSize() << "-byte Reload\n";
+    }
+  } else if (TM.getInstrInfo()->hasLoadFromStackSlot(&MI, MMO, FI)) {
+    if (FrameInfo->isSpillSlotObjectIndex(FI))
+      CommentOS << MMO->getSize() << "-byte Folded Reload\n";
+  } else if (TM.getInstrInfo()->isStoreToStackSlotPostFE(&MI, FI)) {
+    if (FrameInfo->isSpillSlotObjectIndex(FI)) {
+      MMO = *MI.memoperands_begin();
+      CommentOS << MMO->getSize() << "-byte Spill\n";
+    }
+  } else if (TM.getInstrInfo()->hasStoreToStackSlot(&MI, MMO, FI)) {
+    if (FrameInfo->isSpillSlotObjectIndex(FI))
+      CommentOS << MMO->getSize() << "-byte Folded Spill\n";
+  }
+  
+  // Check for spill-induced copies
+  unsigned SrcReg, DstReg, SrcSubIdx, DstSubIdx;
+  if (TM.getInstrInfo()->isMoveInstr(MI, SrcReg, DstReg,
+                                     SrcSubIdx, DstSubIdx)) {
+    if (MI.getAsmPrinterFlag(MachineInstr::ReloadReuse))
+      CommentOS << " Reload Reuse\n";
+  }
+}
+
+
+
+/// EmitFunctionBody - This method emits the body and trailer for a
+/// function.
+void AsmPrinter::EmitFunctionBody() {
+  // Emit target-specific gunk before the function body.
+  EmitFunctionBodyStart();
+  
+  // Print out code for the function.
+  bool HasAnyRealCode = false;
+  for (MachineFunction::const_iterator I = MF->begin(), E = MF->end();
+       I != E; ++I) {
+    // Print a label for the basic block.
+    EmitBasicBlockStart(I);
+    for (MachineBasicBlock::const_iterator II = I->begin(), IE = I->end();
+         II != IE; ++II) {
+      // Print the assembly for the instruction.
+      if (!II->isLabel())
+        HasAnyRealCode = true;
+      
+      ++EmittedInsts;
+      
+      // FIXME: Clean up processDebugLoc.
+      processDebugLoc(II, true);
+      
+      if (VerboseAsm)
+        EmitComments(*II, OutStreamer.GetCommentOS());
+
+      switch (II->getOpcode()) {
+      case TargetOpcode::DBG_LABEL:
+      case TargetOpcode::EH_LABEL:
+      case TargetOpcode::GC_LABEL:
+        printLabelInst(II);
+        break;
+      case TargetOpcode::INLINEASM:
+        printInlineAsm(II);
+        break;
+      case TargetOpcode::IMPLICIT_DEF:
+        printImplicitDef(II);
+        break;
+      case TargetOpcode::KILL:
+        printKill(II);
+        break;
+      default:
+        EmitInstruction(II);
+        break;
+      }
+      
+      // FIXME: Clean up processDebugLoc.
+      processDebugLoc(II, false);
+    }
+  }
+  
+  // If the function is empty and the object file uses .subsections_via_symbols,
+  // then we need to emit *something* to the function body to prevent the
+  // labels from collapsing together.  Just emit a 0 byte.
+  if (MAI->hasSubsectionsViaSymbols() && !HasAnyRealCode)
+    OutStreamer.EmitIntValue(0, 1, 0/*addrspace*/);
+  
+  // Emit target-specific gunk after the function body.
+  EmitFunctionBodyEnd();
+  
+  if (MAI->hasDotTypeDotSizeDirective())
+    O << "\t.size\t" << *CurrentFnSym << ", .-" << *CurrentFnSym << '\n';
+  
+  // Emit post-function debug information.
+  if (MAI->doesSupportDebugInformation() || MAI->doesSupportExceptionHandling())
+    DW->EndFunction(MF);
+  
+  // Print out jump tables referenced by the function.
+  EmitJumpTableInfo();
+  
+  OutStreamer.AddBlankLine();
+}
+
 
 bool AsmPrinter::doFinalization(Module &M) {
   // Emit global variables.
@@ -318,7 +488,7 @@ bool AsmPrinter::doFinalization(Module &M) {
     }
   }
 
-  if (MAI->getSetDirective()) {
+  if (MAI->hasSetDirective()) {
     OutStreamer.AddBlankLine();
     for (Module::const_alias_iterator I = M.alias_begin(), E = M.alias_end();
          I != E; ++I) {
@@ -334,9 +504,11 @@ bool AsmPrinter::doFinalization(Module &M) {
       else
         assert(I->hasLocalLinkage() && "Invalid alias linkage");
 
-      printVisibility(Name, I->getVisibility());
+      EmitVisibility(Name, I->getVisibility());
 
-      O << MAI->getSetDirective() << ' ' << *Name << ", " << *Target << '\n';
+      // Emit the directives as assignments aka .set:
+      OutStreamer.EmitAssignment(Name, 
+                                 MCSymbolRefExpr::Create(Target, OutContext));
     }
   }
 
@@ -388,7 +560,8 @@ namespace {
 /// used to print out constants which have been "spilled to memory" by
 /// the code generator.
 ///
-void AsmPrinter::EmitConstantPool(MachineConstantPool *MCP) {
+void AsmPrinter::EmitConstantPool() {
+  const MachineConstantPool *MCP = MF->getConstantPool();
   const std::vector<MachineConstantPoolEntry> &CP = MCP->getConstants();
   if (CP.empty()) return;
 
@@ -475,15 +648,15 @@ void AsmPrinter::EmitConstantPool(MachineConstantPool *MCP) {
 /// EmitJumpTableInfo - Print assembly representations of the jump tables used
 /// by the current function to the current output stream.  
 ///
-void AsmPrinter::EmitJumpTableInfo(MachineFunction &MF) {
-  MachineJumpTableInfo *MJTI = MF.getJumpTableInfo();
+void AsmPrinter::EmitJumpTableInfo() {
+  const MachineJumpTableInfo *MJTI = MF->getJumpTableInfo();
   if (MJTI == 0) return;
   const std::vector<MachineJumpTableEntry> &JT = MJTI->getJumpTables();
   if (JT.empty()) return;
 
   // Pick the directive to use to print the jump table entries, and switch to 
   // the appropriate section.
-  const Function *F = MF.getFunction();
+  const Function *F = MF->getFunction();
   bool JTInDiffSection = false;
   if (// In PIC mode, we need to emit the jump table to the same section as the
       // function body itself, otherwise the label differences won't make sense.
@@ -494,8 +667,7 @@ void AsmPrinter::EmitJumpTableInfo(MachineFunction &MF) {
       // FIXME: this isn't the right predicate, should be based on the MCSection
       // for the function.
       F->isWeakForLinker()) {
-    OutStreamer.SwitchSection(getObjFileLowering().SectionForGlobal(F, Mang,
-                                                                    TM));
+    OutStreamer.SwitchSection(getObjFileLowering().SectionForGlobal(F,Mang,TM));
   } else {
     // Otherwise, drop it in the readonly section.
     const MCSection *ReadOnlySection = 
@@ -516,18 +688,19 @@ void AsmPrinter::EmitJumpTableInfo(MachineFunction &MF) {
     // .set directive for each unique entry.  This reduces the number of
     // relocations the assembler will generate for the jump table.
     if (MJTI->getEntryKind() == MachineJumpTableInfo::EK_LabelDifference32 &&
-        MAI->getSetDirective()) {
+        MAI->hasSetDirective()) {
       SmallPtrSet<const MachineBasicBlock*, 16> EmittedSets;
       const TargetLowering *TLI = TM.getTargetLowering();
-      const MCExpr *Base = TLI->getPICJumpTableRelocBaseExpr(&MF, JTI,
-                                                             OutContext);
+      const MCExpr *Base = TLI->getPICJumpTableRelocBaseExpr(MF,JTI,OutContext);
       for (unsigned ii = 0, ee = JTBBs.size(); ii != ee; ++ii) {
         const MachineBasicBlock *MBB = JTBBs[ii];
         if (!EmittedSets.insert(MBB)) continue;
         
-        O << MAI->getSetDirective() << ' '
-          << *GetJTSetSymbol(JTI, MBB->getNumber()) << ','
-          << *MBB->getSymbol(OutContext) << '-' << *Base << '\n';
+        // .set LJTSet, LBB32-base
+        const MCExpr *LHS =
+          MCSymbolRefExpr::Create(MBB->getSymbol(OutContext), OutContext);
+        OutStreamer.EmitAssignment(GetJTSetSymbol(JTI, MBB->getNumber()),
+                                MCBinaryExpr::CreateSub(LHS, Base, OutContext));
       }
     }          
     
@@ -584,7 +757,7 @@ void AsmPrinter::EmitJumpTableEntry(const MachineJumpTableInfo *MJTI,
     // If we have emitted set directives for the jump table entries, print 
     // them rather than the entries themselves.  If we're emitting PIC, then
     // emit the table entries as differences between two text section labels.
-    if (MAI->getSetDirective()) {
+    if (MAI->hasSetDirective()) {
       // If we used .set, reference the .set's symbol.
       Value = MCSymbolRefExpr::Create(GetJTSetSymbol(UID, MBB->getNumber()),
                                       OutContext);
@@ -774,15 +947,18 @@ static const MCExpr *LowerConstant(const Constant *CV, AsmPrinter &AP) {
   }
   
   switch (CE->getOpcode()) {
-  case Instruction::ZExt:
-  case Instruction::SExt:
-  case Instruction::FPTrunc:
-  case Instruction::FPExt:
-  case Instruction::UIToFP:
-  case Instruction::SIToFP:
-  case Instruction::FPToUI:
-  case Instruction::FPToSI:
-  default: llvm_unreachable("FIXME: Don't support this constant cast expr");
+  default:
+    // If the code isn't optimized, there may be outstanding folding
+    // opportunities. Attempt to fold the expression using TargetData as a
+    // last resort before giving up.
+    if (Constant *C =
+          ConstantFoldConstantExpression(CE, AP.TM.getTargetData()))
+      if (C != CE)
+        return LowerConstant(C, AP);
+#ifndef NDEBUG
+    CE->dump();
+#endif
+    llvm_unreachable("FIXME: Don't support this constant expr");
   case Instruction::GetElementPtr: {
     const TargetData &TD = *AP.TM.getTargetData();
     // Generate a symbolic expression for the byte address
@@ -846,8 +1022,14 @@ static const MCExpr *LowerConstant(const Constant *CV, AsmPrinter &AP) {
     return MCBinaryExpr::CreateAnd(OpExpr, MaskExpr, Ctx);
   }
       
+  // The MC library also has a right-shift operator, but it isn't consistently
+  // signed or unsigned between different targets.
   case Instruction::Add:
   case Instruction::Sub:
+  case Instruction::Mul:
+  case Instruction::SDiv:
+  case Instruction::SRem:
+  case Instruction::Shl:
   case Instruction::And:
   case Instruction::Or:
   case Instruction::Xor: {
@@ -857,6 +1039,10 @@ static const MCExpr *LowerConstant(const Constant *CV, AsmPrinter &AP) {
     default: llvm_unreachable("Unknown binary operator constant cast expr");
     case Instruction::Add: return MCBinaryExpr::CreateAdd(LHS, RHS, Ctx);
     case Instruction::Sub: return MCBinaryExpr::CreateSub(LHS, RHS, Ctx);
+    case Instruction::Mul: return MCBinaryExpr::CreateMul(LHS, RHS, Ctx);
+    case Instruction::SDiv: return MCBinaryExpr::CreateDiv(LHS, RHS, Ctx);
+    case Instruction::SRem: return MCBinaryExpr::CreateMod(LHS, RHS, Ctx);
+    case Instruction::Shl: return MCBinaryExpr::CreateShl(LHS, RHS, Ctx);
     case Instruction::And: return MCBinaryExpr::CreateAnd(LHS, RHS, Ctx);
     case Instruction::Or:  return MCBinaryExpr::CreateOr (LHS, RHS, Ctx);
     case Instruction::Xor: return MCBinaryExpr::CreateXor(LHS, RHS, Ctx);
@@ -1007,6 +1193,7 @@ static void EmitGlobalConstantLargeInt(const ConstantInt *CI,
 void AsmPrinter::EmitGlobalConstant(const Constant *CV, unsigned AddrSpace) {
   if (isa<ConstantAggregateZero>(CV) || isa<UndefValue>(CV)) {
     uint64_t Size = TM.getTargetData()->getTypeAllocSize(CV->getType());
+    if (Size == 0) Size = 1; // An empty "_foo:" followed by a section is undef.
     return OutStreamer.EmitZeros(Size, AddrSpace);
   }
 
@@ -1315,6 +1502,7 @@ void AsmPrinter::printInlineAsm(const MachineInstr *MI) const {
     }
   }
   O << "\n\t" << MAI->getCommentString() << MAI->getInlineAsmEnd();
+  OutStreamer.AddBlankLine();
 }
 
 /// printImplicitDef - This method prints the specified machine instruction
@@ -1324,6 +1512,7 @@ void AsmPrinter::printImplicitDef(const MachineInstr *MI) const {
   O.PadToColumn(MAI->getCommentColumn());
   O << MAI->getCommentString() << " implicit-def: "
     << TRI->getName(MI->getOperand(0).getReg());
+  OutStreamer.AddBlankLine();
 }
 
 void AsmPrinter::printKill(const MachineInstr *MI) const {
@@ -1335,12 +1524,14 @@ void AsmPrinter::printKill(const MachineInstr *MI) const {
     assert(op.isReg() && "KILL instruction must have only register operands");
     O << ' ' << TRI->getName(op.getReg()) << (op.isDef() ? "<def>" : "<kill>");
   }
+  OutStreamer.AddBlankLine();
 }
 
 /// printLabel - This method prints a local label used by debug and
 /// exception handling tables.
-void AsmPrinter::printLabel(const MachineInstr *MI) const {
+void AsmPrinter::printLabelInst(const MachineInstr *MI) const {
   printLabel(MI->getOperand(0).getImm());
+  OutStreamer.AddBlankLine();
 }
 
 void AsmPrinter::printLabel(unsigned Id) const {
@@ -1363,14 +1554,12 @@ bool AsmPrinter::PrintAsmMemoryOperand(const MachineInstr *MI, unsigned OpNo,
   return true;
 }
 
-MCSymbol *AsmPrinter::GetBlockAddressSymbol(const BlockAddress *BA,
-                                            const char *Suffix) const {
-  return GetBlockAddressSymbol(BA->getFunction(), BA->getBasicBlock(), Suffix);
+MCSymbol *AsmPrinter::GetBlockAddressSymbol(const BlockAddress *BA) const {
+  return GetBlockAddressSymbol(BA->getFunction(), BA->getBasicBlock());
 }
 
 MCSymbol *AsmPrinter::GetBlockAddressSymbol(const Function *F,
-                                            const BasicBlock *BB,
-                                            const char *Suffix) const {
+                                            const BasicBlock *BB) const {
   assert(BB->hasName() &&
          "Address of anonymous basic block not supported yet!");
 
@@ -1384,7 +1573,7 @@ MCSymbol *AsmPrinter::GetBlockAddressSymbol(const Function *F,
   SmallString<60> NameResult;
   Mang->getNameWithPrefix(NameResult,
                           StringRef("BA") + Twine((unsigned)FnName.size()) + 
-                          "_" + FnName.str() + "_" + BB->getName() + Suffix, 
+                          "_" + FnName.str() + "_" + BB->getName(), 
                           Mangler::Private);
 
   return OutContext.GetOrCreateSymbol(NameResult.str());
@@ -1468,7 +1657,7 @@ static void PrintChildLoopComment(raw_ostream &OS, const MachineLoop *Loop,
   }
 }
 
-/// EmitComments - Pretty-print comments for basic blocks.
+/// PrintBasicBlockLoopComments - Pretty-print comments for basic blocks.
 static void PrintBasicBlockLoopComments(const MachineBasicBlock &MBB,
                                         const MachineLoopInfo *LI,
                                         const AsmPrinter &AP) {
@@ -1551,7 +1740,7 @@ void AsmPrinter::EmitBasicBlockStart(const MachineBasicBlock *MBB) const {
   }
 }
 
-void AsmPrinter::printVisibility(MCSymbol *Sym, unsigned Visibility) const {
+void AsmPrinter::EmitVisibility(MCSymbol *Sym, unsigned Visibility) const {
   MCSymbolAttr Attr = MCSA_Invalid;
   
   switch (Visibility) {
@@ -1599,86 +1788,3 @@ GCMetadataPrinter *AsmPrinter::GetOrCreateGCPrinter(GCStrategy *S) {
   return 0;
 }
 
-/// EmitComments - Pretty-print comments for instructions
-void AsmPrinter::EmitComments(const MachineInstr &MI) const {
-  if (!VerboseAsm)
-    return;
-
-  bool Newline = false;
-
-  if (!MI.getDebugLoc().isUnknown()) {
-    DILocation DLT = MF->getDILocation(MI.getDebugLoc());
-
-    // Print source line info.
-    O.PadToColumn(MAI->getCommentColumn());
-    O << MAI->getCommentString() << ' ';
-    DIScope Scope = DLT.getScope();
-    // Omit the directory, because it's likely to be long and uninteresting.
-    if (!Scope.isNull())
-      O << Scope.getFilename();
-    else
-      O << "<unknown>";
-    O << ':' << DLT.getLineNumber();
-    if (DLT.getColumnNumber() != 0)
-      O << ':' << DLT.getColumnNumber();
-    Newline = true;
-  }
-
-  // Check for spills and reloads
-  int FI;
-
-  const MachineFrameInfo *FrameInfo =
-    MI.getParent()->getParent()->getFrameInfo();
-
-  // We assume a single instruction only has a spill or reload, not
-  // both.
-  const MachineMemOperand *MMO;
-  if (TM.getInstrInfo()->isLoadFromStackSlotPostFE(&MI, FI)) {
-    if (FrameInfo->isSpillSlotObjectIndex(FI)) {
-      MMO = *MI.memoperands_begin();
-      if (Newline) O << '\n';
-      O.PadToColumn(MAI->getCommentColumn());
-      O << MAI->getCommentString() << ' ' << MMO->getSize() << "-byte Reload";
-      Newline = true;
-    }
-  }
-  else if (TM.getInstrInfo()->hasLoadFromStackSlot(&MI, MMO, FI)) {
-    if (FrameInfo->isSpillSlotObjectIndex(FI)) {
-      if (Newline) O << '\n';
-      O.PadToColumn(MAI->getCommentColumn());
-      O << MAI->getCommentString() << ' '
-        << MMO->getSize() << "-byte Folded Reload";
-      Newline = true;
-    }
-  }
-  else if (TM.getInstrInfo()->isStoreToStackSlotPostFE(&MI, FI)) {
-    if (FrameInfo->isSpillSlotObjectIndex(FI)) {
-      MMO = *MI.memoperands_begin();
-      if (Newline) O << '\n';
-      O.PadToColumn(MAI->getCommentColumn());
-      O << MAI->getCommentString() << ' ' << MMO->getSize() << "-byte Spill";
-      Newline = true;
-    }
-  }
-  else if (TM.getInstrInfo()->hasStoreToStackSlot(&MI, MMO, FI)) {
-    if (FrameInfo->isSpillSlotObjectIndex(FI)) {
-      if (Newline) O << '\n';
-      O.PadToColumn(MAI->getCommentColumn());
-      O << MAI->getCommentString() << ' '
-        << MMO->getSize() << "-byte Folded Spill";
-      Newline = true;
-    }
-  }
-
-  // Check for spill-induced copies
-  unsigned SrcReg, DstReg, SrcSubIdx, DstSubIdx;
-  if (TM.getInstrInfo()->isMoveInstr(MI, SrcReg, DstReg,
-                                      SrcSubIdx, DstSubIdx)) {
-    if (MI.getAsmPrinterFlag(ReloadReuse)) {
-      if (Newline) O << '\n';
-      O.PadToColumn(MAI->getCommentColumn());
-      O << MAI->getCommentString() << " Reload Reuse";
-    }
-  }
-}
-
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.cpp b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.cpp
index 349e0ac..63360c0 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DIE.cpp
@@ -313,6 +313,7 @@ void DIESectionOffset::EmitValue(DwarfPrinter *D, unsigned Form) const {
   D->EmitSectionOffset(Label.getTag(), Section.getTag(),
                        Label.getNumber(), Section.getNumber(),
                        IsSmall, IsEH, UseSet);
+  D->getAsm()->O << '\n'; // FIXME: Necessary?
 }
 
 /// SizeOf - Determine size of delta value in bytes.
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp
index 532a68f..5093dd9 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp
@@ -26,6 +26,7 @@
 #include "llvm/ADT/StringExtras.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
+#include "llvm/Support/ValueHandle.h"
 #include "llvm/Support/FormattedStream.h"
 #include "llvm/Support/Timer.h"
 #include "llvm/System/Path.h"
@@ -166,7 +167,8 @@ public:
 class DbgScope {
   DbgScope *Parent;                   // Parent to this scope.
   DIDescriptor Desc;                  // Debug info descriptor for scope.
-  MDNode * InlinedAtLocation;           // Location at which scope is inlined.
+  // Location at which this scope is inlined.
+  AssertingVH<MDNode> InlinedAtLocation;  
   bool AbstractScope;                 // Abstract Scope
   unsigned StartLabelID;              // Label ID of the beginning of scope.
   unsigned EndLabelID;                // Label ID of the end of scope.
@@ -189,7 +191,7 @@ public:
   void setParent(DbgScope *P)          { Parent = P; }
   DIDescriptor getDesc()         const { return Desc; }
   MDNode *getInlinedAt()         const {
-    return dyn_cast_or_null<MDNode>(InlinedAtLocation);
+    return InlinedAtLocation;
   }
   MDNode *getScopeNode()         const { return Desc.getNode(); }
   unsigned getStartLabelID()     const { return StartLabelID; }
@@ -616,7 +618,7 @@ void DwarfDebug::addComplexAddress(DbgVariable *&DV, DIE *Die,
 
    1).  Add the offset of the forwarding field.
 
-   2).  Follow that pointer to get the the real __Block_byref_x_VarName
+   2).  Follow that pointer to get the real __Block_byref_x_VarName
    struct to use (the real one may have been copied onto the heap).
 
    3).  Add the offset for the field VarName, to find the actual variable.
@@ -937,7 +939,16 @@ void DwarfDebug::constructTypeDIE(DIE &Buffer, DICompositeType CTy) {
       DIE *ElemDie = NULL;
       if (Element.getTag() == dwarf::DW_TAG_subprogram)
         ElemDie = createSubprogramDIE(DISubprogram(Element.getNode()));
-      else
+      else if (Element.getTag() == dwarf::DW_TAG_auto_variable) {
+        DIVariable DV(Element.getNode());
+        ElemDie = new DIE(dwarf::DW_TAG_variable);
+        addString(ElemDie, dwarf::DW_AT_name, dwarf::DW_FORM_string,
+                  DV.getName());
+        addType(ElemDie, DV.getType());
+        addUInt(ElemDie, dwarf::DW_AT_declaration, dwarf::DW_FORM_flag, 1);
+        addUInt(ElemDie, dwarf::DW_AT_external, dwarf::DW_FORM_flag, 1);
+        addSourceLine(ElemDie, &DV);
+      } else
         ElemDie = createMemberDIE(DIDerivedType(Element.getNode()));
       Buffer.addChild(ElemDie);
     }
@@ -949,6 +960,11 @@ void DwarfDebug::constructTypeDIE(DIE &Buffer, DICompositeType CTy) {
     if (RLang)
       addUInt(&Buffer, dwarf::DW_AT_APPLE_runtime_class,
               dwarf::DW_FORM_data1, RLang);
+
+    DICompositeType ContainingType = CTy.getContainingType();
+    if (!ContainingType.isNull())
+      addDIEEntry(&Buffer, dwarf::DW_AT_containing_type, dwarf::DW_FORM_ref4, 
+                  getOrCreateTypeDIE(DIType(ContainingType.getNode())));
     break;
   }
   default:
@@ -959,7 +975,7 @@ void DwarfDebug::constructTypeDIE(DIE &Buffer, DICompositeType CTy) {
   if (!Name.empty())
     addString(&Buffer, dwarf::DW_AT_name, dwarf::DW_FORM_string, Name);
 
-  if (Tag == dwarf::DW_TAG_enumeration_type ||
+  if (Tag == dwarf::DW_TAG_enumeration_type || Tag == dwarf::DW_TAG_class_type ||
       Tag == dwarf::DW_TAG_structure_type || Tag == dwarf::DW_TAG_union_type) {
     // Add size if non-zero (derived types might be zero-sized.)
     if (Size)
@@ -1107,7 +1123,26 @@ DIE *DwarfDebug::createMemberDIE(const DIDerivedType &DT) {
     // This is not a bitfield.
     addUInt(MemLocationDie, 0, dwarf::DW_FORM_udata, DT.getOffsetInBits() >> 3);
 
-  addBlock(MemberDie, dwarf::DW_AT_data_member_location, 0, MemLocationDie);
+  if (DT.getTag() == dwarf::DW_TAG_inheritance
+      && DT.isVirtual()) {
+
+    // For C++, virtual base classes are not at fixed offset. Use following
+    // expression to extract appropriate offset from vtable.
+    // BaseAddr = ObAddr + *((*ObAddr) - Offset)
+
+    DIEBlock *VBaseLocationDie = new DIEBlock();
+    addUInt(VBaseLocationDie, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_dup);
+    addUInt(VBaseLocationDie, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_deref);
+    addUInt(VBaseLocationDie, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_constu);
+    addUInt(VBaseLocationDie, 0, dwarf::DW_FORM_udata, DT.getOffsetInBits());
+    addUInt(VBaseLocationDie, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_minus);
+    addUInt(VBaseLocationDie, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_deref);
+    addUInt(VBaseLocationDie, 0, dwarf::DW_FORM_data1, dwarf::DW_OP_plus);
+
+    addBlock(MemberDie, dwarf::DW_AT_data_member_location, 0, 
+             VBaseLocationDie);
+  } else
+    addBlock(MemberDie, dwarf::DW_AT_data_member_location, 0, MemLocationDie);
 
   if (DT.isProtected())
     addUInt(MemberDie, dwarf::DW_AT_accessibility, dwarf::DW_FORM_flag,
@@ -1179,12 +1214,17 @@ DIE *DwarfDebug::createSubprogramDIE(const DISubprogram &SP, bool MakeDecl) {
     if (SPTag == dwarf::DW_TAG_subroutine_type)
       for (unsigned i = 1, N =  Args.getNumElements(); i < N; ++i) {
         DIE *Arg = new DIE(dwarf::DW_TAG_formal_parameter);
-        addType(Arg, DIType(Args.getElement(i).getNode()));
-        addUInt(Arg, dwarf::DW_AT_artificial, dwarf::DW_FORM_flag, 1); // ??
+        DIType ATy = DIType(DIType(Args.getElement(i).getNode()));
+        addType(Arg, ATy);
+        if (ATy.isArtificial())
+          addUInt(Arg, dwarf::DW_AT_artificial, dwarf::DW_FORM_flag, 1);
         SPDie->addChild(Arg);
       }
   }
 
+  if (SP.isArtificial())
+    addUInt(SPDie, dwarf::DW_AT_artificial, dwarf::DW_FORM_flag, 1);
+
   // DW_TAG_inlined_subroutine may refer to this DIE.
   ModuleCU->insertDIE(SP.getNode(), SPDie);
   return SPDie;
@@ -1289,7 +1329,13 @@ DIE *DwarfDebug::updateSubprogramScopeDIE(MDNode *SPNode) {
  DIE *SPDie = ModuleCU->getDIE(SPNode);
  assert (SPDie && "Unable to find subprogram DIE!");
  DISubprogram SP(SPNode);
- if (SP.isDefinition() && !SP.getContext().isCompileUnit()) {
+ // There is no need to generate a specification DIE for a function
+ // defined at compile unit level. If a function is defined inside another
+ // function then gdb prefers the definition at top level and does not
+ // expect a specification DIE in the parent function, so avoid creating a
+ // specification DIE for a function defined inside a function.
+ if (SP.isDefinition() && !SP.getContext().isCompileUnit()
+     && !SP.getContext().isSubprogram()) {
    addUInt(SPDie, dwarf::DW_AT_declaration, dwarf::DW_FORM_flag, 1);
   // Add arguments. 
    DICompositeType SPTy = SP.getType();
@@ -1298,8 +1344,10 @@ DIE *DwarfDebug::updateSubprogramScopeDIE(MDNode *SPNode) {
    if (SPTag == dwarf::DW_TAG_subroutine_type)
      for (unsigned i = 1, N =  Args.getNumElements(); i < N; ++i) {
        DIE *Arg = new DIE(dwarf::DW_TAG_formal_parameter);
-       addType(Arg, DIType(Args.getElement(i).getNode()));
-       addUInt(Arg, dwarf::DW_AT_artificial, dwarf::DW_FORM_flag, 1); // ??
+       DIType ATy = DIType(DIType(Args.getElement(i).getNode()));
+       addType(Arg, ATy);
+       if (ATy.isArtificial())
+         addUInt(Arg, dwarf::DW_AT_artificial, dwarf::DW_FORM_flag, 1);
        SPDie->addChild(Arg);
      }
    DIE *SPDeclDie = SPDie;
@@ -1308,7 +1356,7 @@ DIE *DwarfDebug::updateSubprogramScopeDIE(MDNode *SPNode) {
                SPDeclDie);
    ModuleCU->addDie(SPDie);
  }
-   
+
  addLabel(SPDie, dwarf::DW_AT_low_pc, dwarf::DW_FORM_addr,
           DWLabel("func_begin", SubprogramCount));
  addLabel(SPDie, dwarf::DW_AT_high_pc, dwarf::DW_FORM_addr,
@@ -1471,6 +1519,9 @@ DIE *DwarfDebug::constructVariableDIE(DbgVariable *DV, DbgScope *Scope) {
     else
       addAddress(VariableDie, dwarf::DW_AT_location, Location);
   }
+
+  if (Tag == dwarf::DW_TAG_formal_parameter && VD.getType().isArtificial())
+    addUInt(VariableDie, dwarf::DW_AT_artificial, dwarf::DW_FORM_flag, 1);
   DV->setDIE(VariableDie);
   return VariableDie;
 
@@ -1669,6 +1720,7 @@ void DwarfDebug::constructGlobalVariableDIE(MDNode *N) {
     addObjectLabel(Block, 0, dwarf::DW_FORM_udata,
                    Asm->GetGlobalValueSymbol(DI_GV.getGlobal()));
     addBlock(VariableSpecDIE, dwarf::DW_AT_location, 0, Block);
+    addUInt(VariableDie, dwarf::DW_AT_declaration, dwarf::DW_FORM_flag, 1);
     ModuleCU->addDie(VariableSpecDIE);
   } else {
     DIEBlock *Block = new DIEBlock();
@@ -1985,7 +2037,7 @@ void DwarfDebug::createDbgScope(MDNode *Scope, MDNode *InlinedAt) {
 
 /// extractScopeInformation - Scan machine instructions in this function
 /// and collect DbgScopes. Return true if at least one scope was found.
-bool DwarfDebug::extractScopeInformation(MachineFunction *MF) {
+bool DwarfDebug::extractScopeInformation() {
   // If scope information was extracted using .dbg intrinsics then there is not
   // any need to extract these information by scanning each instruction.
   if (!DbgScopeMap.empty())
@@ -2080,7 +2132,7 @@ bool DwarfDebug::extractScopeInformation(MachineFunction *MF) {
 
 /// beginFunction - Gather pre-function debug information.  Assumes being
 /// emitted immediately after the function entry point.
-void DwarfDebug::beginFunction(MachineFunction *MF) {
+void DwarfDebug::beginFunction(const MachineFunction *MF) {
   this->MF = MF;
 
   if (!ShouldEmitDwarfDebug()) return;
@@ -2088,14 +2140,11 @@ void DwarfDebug::beginFunction(MachineFunction *MF) {
   if (TimePassesIsEnabled)
     DebugTimer->startTimer();
 
-  if (!extractScopeInformation(MF))
+  if (!extractScopeInformation())
     return;
 
   collectVariableInfo();
 
-  // Begin accumulating function debug information.
-  MMI->BeginFunction(MF);
-
   // Assumes in correct section after the entry point.
   EmitLabel("func_begin", ++SubprogramCount);
 
@@ -2122,7 +2171,7 @@ void DwarfDebug::beginFunction(MachineFunction *MF) {
 
 /// endFunction - Gather and emit post-function debug information.
 ///
-void DwarfDebug::endFunction(MachineFunction *MF) {
+void DwarfDebug::endFunction(const MachineFunction *MF) {
   if (!ShouldEmitDwarfDebug()) return;
 
   if (TimePassesIsEnabled)
@@ -2768,7 +2817,8 @@ void DwarfDebug::emitDebugPubTypes() {
 
   EmitLabel("pubtypes_begin", ModuleCU->getID());
 
-  Asm->EmitInt16(dwarf::DWARF_VERSION); EOL("DWARF Version");
+  if (Asm->VerboseAsm) Asm->OutStreamer.AddComment("DWARF Version");
+  Asm->EmitInt16(dwarf::DWARF_VERSION);
 
   EmitSectionOffset("info_begin", "section_info",
                     ModuleCU->getID(), 0, true, false);
@@ -2784,10 +2834,11 @@ void DwarfDebug::emitDebugPubTypes() {
     const char *Name = GI->getKeyData();
     DIE * Entity = GI->second;
 
-    Asm->EmitInt32(Entity->getOffset()); EOL("DIE offset");
+    if (Asm->VerboseAsm) Asm->OutStreamer.AddComment("DIE offset");
+    Asm->EmitInt32(Entity->getOffset());
     
     if (Asm->VerboseAsm) Asm->OutStreamer.AddComment("External Name");
-    Asm->OutStreamer.EmitBytes(StringRef(Name, strlen(Name)), 0);
+    Asm->OutStreamer.EmitBytes(StringRef(Name, GI->getKeyLength()+1), 0);
   }
 
   Asm->EmitInt32(0); EOL("End Mark");
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h
index e723621..55baa92 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h
@@ -103,7 +103,7 @@ class DwarfDebug : public DwarfPrinter {
   ///
   SmallVector<std::pair<unsigned, unsigned>, 8> SourceIds;
 
-  /// Lines - List of of source line correspondence.
+  /// Lines - List of source line correspondence.
   std::vector<SrcLineInfo> Lines;
 
   /// DIEValues - A list of all the unique values in use.
@@ -523,11 +523,11 @@ public:
 
   /// beginFunction - Gather pre-function debug information.  Assumes being
   /// emitted immediately after the function entry point.
-  void beginFunction(MachineFunction *MF);
+  void beginFunction(const MachineFunction *MF);
 
   /// endFunction - Gather and emit post-function debug information.
   ///
-  void endFunction(MachineFunction *MF);
+  void endFunction(const MachineFunction *MF);
 
   /// recordSourceLine - Records location information and associates it with a 
   /// label. Returns a unique label ID used to generate a label and provide
@@ -550,7 +550,7 @@ public:
 
   /// extractScopeInformation - Scan machine instructions in this function
   /// and collect DbgScopes. Return true if at least one scope was found.
-  bool extractScopeInformation(MachineFunction *MF);
+  bool extractScopeInformation();
 
   /// collectVariableInfo - Populate DbgScope entries with variables' info.
   void collectVariableInfo();
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.cpp b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.cpp
index 2ae16c0..22b1b1c 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.cpp
@@ -34,6 +34,7 @@
 #include "llvm/Support/Timer.h"
 #include "llvm/ADT/SmallString.h"
 #include "llvm/ADT/StringExtras.h"
+#include "llvm/ADT/Twine.h"
 using namespace llvm;
 
 DwarfException::DwarfException(raw_ostream &OS, AsmPrinter *A,
@@ -406,20 +407,22 @@ ComputeActionsTable(const SmallVectorImpl<const LandingPadInfo*> &LandingPads,
 
     if (NumShared < TypeIds.size()) {
       unsigned SizeAction = 0;
-      ActionEntry *PrevAction = 0;
+      unsigned PrevAction = (unsigned)-1;
 
       if (NumShared) {
         const unsigned SizePrevIds = PrevLPI->TypeIds.size();
         assert(Actions.size());
-        PrevAction = &Actions.back();
-        SizeAction = MCAsmInfo::getSLEB128Size(PrevAction->NextAction) +
-          MCAsmInfo::getSLEB128Size(PrevAction->ValueForTypeID);
+        PrevAction = Actions.size() - 1;
+        SizeAction =
+          MCAsmInfo::getSLEB128Size(Actions[PrevAction].NextAction) +
+          MCAsmInfo::getSLEB128Size(Actions[PrevAction].ValueForTypeID);
 
         for (unsigned j = NumShared; j != SizePrevIds; ++j) {
+          assert(PrevAction != (unsigned)-1 && "PrevAction is invalid!");
           SizeAction -=
-            MCAsmInfo::getSLEB128Size(PrevAction->ValueForTypeID);
-          SizeAction += -PrevAction->NextAction;
-          PrevAction = PrevAction->Previous;
+            MCAsmInfo::getSLEB128Size(Actions[PrevAction].ValueForTypeID);
+          SizeAction += -Actions[PrevAction].NextAction;
+          PrevAction = Actions[PrevAction].Previous;
         }
       }
 
@@ -436,7 +439,7 @@ ComputeActionsTable(const SmallVectorImpl<const LandingPadInfo*> &LandingPads,
 
         ActionEntry Action = { ValueForTypeID, NextAction, PrevAction };
         Actions.push_back(Action);
-        PrevAction = &Actions.back();
+        PrevAction = Actions.size() - 1;
       }
 
       // Record the first action of the landing pad site.
@@ -446,7 +449,7 @@ ComputeActionsTable(const SmallVectorImpl<const LandingPadInfo*> &LandingPads,
     // Information used when created the call-site table. The action record
     // field of the call site record is the offset of the first associated
     // action record, relative to the start of the actions table. This value is
-    // biased by 1 (1 in dicating the start of the actions table), and 0
+    // biased by 1 (1 indicating the start of the actions table), and 0
     // indicates that there are no actions.
     FirstActions.push_back(FirstAction);
 
@@ -579,7 +582,16 @@ ComputeCallSiteTable(SmallVectorImpl<CallSiteEntry> &CallSites,
         }
 
         // Otherwise, create a new call-site.
-        CallSites.push_back(Site);
+        if (MAI->getExceptionHandlingType() == ExceptionHandling::Dwarf)
+          CallSites.push_back(Site);
+        else {
+          // SjLj EH must maintain the call sites in the order assigned
+          // to them by the SjLjPrepare pass.
+          unsigned SiteNo = MMI->getCallSiteBeginLabel(BeginLabel);
+          if (CallSites.size() < SiteNo)
+            CallSites.resize(SiteNo);
+          CallSites[SiteNo - 1] = Site;
+        }
         PreviousIsInvoke = true;
       } else {
         // Create a gap.
@@ -638,8 +650,7 @@ void DwarfException::EmitExceptionTable() {
   // landing pad site.
   SmallVector<ActionEntry, 32> Actions;
   SmallVector<unsigned, 64> FirstActions;
-  unsigned SizeActions = ComputeActionsTable(LandingPads, Actions,
-                                             FirstActions);
+  unsigned SizeActions=ComputeActionsTable(LandingPads, Actions, FirstActions);
 
   // Invokes and nounwind calls have entries in PadMap (due to being bracketed
   // by try-range labels when lowered).  Ordinary calls do not, so appropriate
@@ -752,7 +763,7 @@ void DwarfException::EmitExceptionTable() {
   // does, instead output it before the table.
   unsigned SizeTypes = TypeInfos.size() * TypeFormatSize;
   unsigned TyOffset = sizeof(int8_t) +          // Call site format
-    MCAsmInfo::getULEB128Size(SizeSites) +      // Call-site table length
+    MCAsmInfo::getULEB128Size(SizeSites) +      // Call site table length
     SizeSites + SizeActions + SizeTypes;
   unsigned TotalSize = sizeof(int8_t) +         // LPStart format
                        sizeof(int8_t) +         // TType format
@@ -826,7 +837,7 @@ void DwarfException::EmitExceptionTable() {
 
     // Emit the landing pad call site table.
     EmitEncodingByte(dwarf::DW_EH_PE_udata4, "Call site");
-    EmitULEB128(SizeSites, "Call site table size");
+    EmitULEB128(SizeSites, "Call site table length");
 
     for (SmallVectorImpl<CallSiteEntry>::const_iterator
          I = CallSites.begin(), E = CallSites.end(); I != E; ++I) {
@@ -859,13 +870,14 @@ void DwarfException::EmitExceptionTable() {
 
       // Offset of the landing pad, counted in 16-byte bundles relative to the
       // @LPStart address.
-      if (!S.PadLabel)
+      if (!S.PadLabel) {
+        Asm->OutStreamer.AddComment("Landing pad");
         Asm->OutStreamer.EmitIntValue(0, 4/*size*/, 0/*addrspace*/);
-      else
+      } else {
         EmitSectionOffset("label", "eh_func_begin", S.PadLabel, SubprogramCount,
                           true, true);
-
-      EOL("Landing pad");
+        EOL("Landing pad");
+      }
 
       // Offset of the first associated action record, relative to the start of
       // the action table. This value is biased by 1 (1 indicates the start of
@@ -875,38 +887,43 @@ void DwarfException::EmitExceptionTable() {
   }
 
   // Emit the Action Table.
+  if (Actions.size() != 0) EOL("-- Action Record Table --");
   for (SmallVectorImpl<ActionEntry>::const_iterator
          I = Actions.begin(), E = Actions.end(); I != E; ++I) {
     const ActionEntry &Action = *I;
+    EOL("Action Record:");
 
     // Type Filter
     //
     //   Used by the runtime to match the type of the thrown exception to the
     //   type of the catch clauses or the types in the exception specification.
-    EmitSLEB128(Action.ValueForTypeID, "TypeInfo index");
+    EmitSLEB128(Action.ValueForTypeID, "  TypeInfo index");
 
     // Action Record
     //
     //   Self-relative signed displacement in bytes of the next action record,
     //   or 0 if there is no next action record.
-    EmitSLEB128(Action.NextAction, "Next action");
+    EmitSLEB128(Action.NextAction, "  Next action");
   }
 
   // Emit the Catch TypeInfos.
+  if (!TypeInfos.empty()) EOL("-- Catch TypeInfos --");
   for (std::vector<GlobalVariable *>::const_reverse_iterator
          I = TypeInfos.rbegin(), E = TypeInfos.rend(); I != E; ++I) {
     const GlobalVariable *GV = *I;
     PrintRelDirective();
 
-    if (GV)
+    if (GV) {
       O << *Asm->GetGlobalValueSymbol(GV);
-    else
+      EOL("TypeInfo");
+    } else {
       O << "0x0";
-
-    EOL("TypeInfo");
+      EOL("");
+    }
   }
 
   // Emit the Exception Specifications.
+  if (!FilterIds.empty()) EOL("-- Filter IDs --");
   for (std::vector<unsigned>::const_iterator
          I = FilterIds.begin(), E = FilterIds.end(); I < E; ++I) {
     unsigned TypeID = *I;
@@ -943,7 +960,7 @@ void DwarfException::EndModule() {
 
 /// BeginFunction - Gather pre-function exception information. Assumes it's
 /// being emitted immediately after the function entry point.
-void DwarfException::BeginFunction(MachineFunction *MF) {
+void DwarfException::BeginFunction(const MachineFunction *MF) {
   if (!MMI || !MAI->doesSupportExceptionHandling()) return;
 
   if (TimePassesIsEnabled)
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.h b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.h
index 3921e91..6177d26 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.h
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfException.h
@@ -103,7 +103,7 @@ class DwarfException : public DwarfPrinter {
   ///     exception.  If it matches then the exception and type id are passed
   ///     on to the landing pad.  Otherwise the next action is looked up.  This
   ///     chain is terminated with a next action of zero.  If no type id is
-  ///     found the the frame is unwound and handling continues.
+  ///     found the frame is unwound and handling continues.
   ///  3. Type id table contains references to all the C++ typeinfo for all
   ///     catches in the function.  This tables is reversed indexed base 1.
 
@@ -135,7 +135,7 @@ class DwarfException : public DwarfPrinter {
   struct ActionEntry {
     int ValueForTypeID; // The value to write - may not be equal to the type id.
     int NextAction;
-    struct ActionEntry *Previous;
+    unsigned Previous;
   };
 
   /// CallSiteEntry - Structure describing an entry in the call-site table.
@@ -197,7 +197,7 @@ public:
 
   /// BeginFunction - Gather pre-function exception information.  Assumes being
   /// emitted immediately after the function entry point.
-  void BeginFunction(MachineFunction *MF);
+  void BeginFunction(const MachineFunction *MF);
 
   /// EndFunction - Gather and emit post-function exception information.
   void EndFunction();
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfPrinter.cpp b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfPrinter.cpp
index f2f444a..4de0b74 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfPrinter.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfPrinter.cpp
@@ -195,13 +195,13 @@ void DwarfPrinter::EmitReference(const MCSymbol *Sym, bool IsPCRelative,
   if (IsPCRelative) O << "-" << MAI->getPCSymbol();
 }
 
-/// EmitDifference - Emit the difference between two labels.  Some assemblers do
-/// not behave with absolute expressions with data directives, so there is an
-/// option (needsSet) to use an intermediary set expression.
+/// EmitDifference - Emit the difference between two labels.  If this assembler
+/// supports .set, we emit a .set of a temporary and then use it in the .word.
 void DwarfPrinter::EmitDifference(const char *TagHi, unsigned NumberHi,
                                   const char *TagLo, unsigned NumberLo,
                                   bool IsSmall) {
-  if (MAI->needsSet()) {
+  if (MAI->hasSetDirective()) {
+    // FIXME: switch to OutStreamer.EmitAssignment.
     O << "\t.set\t";
     PrintLabelName("set", SetCounter, Flavor);
     O << ",";
@@ -232,7 +232,8 @@ void DwarfPrinter::EmitSectionOffset(const char* Label, const char* Section,
   else
     printAbsolute = MAI->isAbsoluteDebugSectionOffsets();
 
-  if (MAI->needsSet() && useSet) {
+  if (MAI->hasSetDirective() && useSet) {
+    // FIXME: switch to OutStreamer.EmitAssignment.
     O << "\t.set\t";
     PrintLabelName("set", SetCounter, Flavor);
     O << ",";
@@ -247,7 +248,6 @@ void DwarfPrinter::EmitSectionOffset(const char* Label, const char* Section,
     PrintRelDirective(IsSmall);
     PrintLabelName("set", SetCounter, Flavor);
     ++SetCounter;
-    O << "\n";
   } else {
     PrintRelDirective(IsSmall, true);
     PrintLabelName(Label, LabelNumber);
@@ -256,7 +256,6 @@ void DwarfPrinter::EmitSectionOffset(const char* Label, const char* Section,
       O << "-";
       PrintLabelName(Section, SectionNumber);
     }
-    O << "\n";
   }
 }
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfPrinter.h b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfPrinter.h
index 86fe2ab..69d9c27 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfPrinter.h
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfPrinter.h
@@ -33,6 +33,8 @@ class Twine;
 
 class DwarfPrinter {
 protected:
+  ~DwarfPrinter() {}
+
   //===-------------------------------------------------------------==---===//
   // Core attributes used by the DWARF printer.
   //
@@ -56,7 +58,7 @@ protected:
   Module *M;
 
   /// MF - Current machine function.
-  MachineFunction *MF;
+  const MachineFunction *MF;
 
   /// MMI - Collected machine module information.
   MachineModuleInfo *MMI;
@@ -138,9 +140,7 @@ public:
   void EmitReference(const MCSymbol *Sym, bool IsPCRelative = false,
                      bool Force32Bit = false) const;
 
-  /// EmitDifference - Emit the difference between two labels.  Some
-  /// assemblers do not behave with absolute expressions with data directives,
-  /// so there is an option (needsSet) to use an intermediary set expression.
+  /// EmitDifference - Emit the difference between two labels.
   void EmitDifference(const DWLabel &LabelHi, const DWLabel &LabelLo,
                       bool IsSmall = false) {
     EmitDifference(LabelHi.getTag(), LabelHi.getNumber(),
diff --git a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfWriter.cpp b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfWriter.cpp
index dd8d88a..08e1bbc 100644
--- a/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfWriter.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/AsmPrinter/DwarfWriter.cpp
@@ -57,14 +57,14 @@ void DwarfWriter::EndModule() {
 
 /// BeginFunction - Gather pre-function debug information.  Assumes being
 /// emitted immediately after the function entry point.
-void DwarfWriter::BeginFunction(MachineFunction *MF) {
+void DwarfWriter::BeginFunction(const MachineFunction *MF) {
   DE->BeginFunction(MF);
   DD->beginFunction(MF);
 }
 
 /// EndFunction - Gather and emit post-function debug information.
 ///
-void DwarfWriter::EndFunction(MachineFunction *MF) {
+void DwarfWriter::EndFunction(const MachineFunction *MF) {
   DD->endFunction(MF);
   DE->EndFunction();
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/BranchFolding.cpp b/libclamav/c++/llvm/lib/CodeGen/BranchFolding.cpp
index 4f76aac..faf4d95 100644
--- a/libclamav/c++/llvm/lib/CodeGen/BranchFolding.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/BranchFolding.cpp
@@ -133,7 +133,7 @@ bool BranchFolder::OptimizeImpDefsBlock(MachineBasicBlock *MBB) {
   SmallSet<unsigned, 4> ImpDefRegs;
   MachineBasicBlock::iterator I = MBB->begin();
   while (I != MBB->end()) {
-    if (I->getOpcode() != TargetInstrInfo::IMPLICIT_DEF)
+    if (!I->isImplicitDef())
       break;
     unsigned Reg = I->getOperand(0).getReg();
     ImpDefRegs.insert(Reg);
@@ -340,7 +340,7 @@ static unsigned ComputeCommonTailLength(MachineBasicBlock *MBB1,
         // relative order. This is untenable because normal compiler
         // optimizations (like this one) may reorder and/or merge these
         // directives.
-        I1->getOpcode() == TargetInstrInfo::INLINEASM) {
+        I1->isInlineAsm()) {
       ++I1; ++I2;
       break;
     }
diff --git a/libclamav/c++/llvm/lib/CodeGen/CMakeLists.txt b/libclamav/c++/llvm/lib/CodeGen/CMakeLists.txt
index 17072d3..46d3268 100644
--- a/libclamav/c++/llvm/lib/CodeGen/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/CodeGen/CMakeLists.txt
@@ -21,7 +21,6 @@ add_llvm_library(LLVMCodeGen
   LiveStackAnalysis.cpp
   LiveVariables.cpp
   LowerSubregs.cpp
-  MachOWriter.cpp
   MachineBasicBlock.cpp
   MachineDominators.cpp
   MachineFunction.cpp
@@ -40,6 +39,7 @@ add_llvm_library(LLVMCodeGen
   ObjectCodeEmitter.cpp
   OcamlGC.cpp
   OptimizeExts.cpp
+  OptimizePHIs.cpp
   PHIElimination.cpp
   Passes.cpp
   PostRASchedulerList.cpp
diff --git a/libclamav/c++/llvm/lib/CodeGen/CalcSpillWeights.cpp b/libclamav/c++/llvm/lib/CodeGen/CalcSpillWeights.cpp
index b8ef219..2bedd04 100644
--- a/libclamav/c++/llvm/lib/CodeGen/CalcSpillWeights.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/CalcSpillWeights.cpp
@@ -20,8 +20,8 @@
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/Target/TargetInstrInfo.h"
+#include "llvm/Target/TargetMachine.h"
 #include "llvm/Target/TargetRegisterInfo.h"
-
 using namespace llvm;
 
 char CalculateSpillWeights::ID = 0;
@@ -58,10 +58,7 @@ bool CalculateSpillWeights::runOnMachineFunction(MachineFunction &fn) {
     for (MachineBasicBlock::const_iterator mii = mbb->begin(), mie = mbb->end();
          mii != mie; ++mii) {
       const MachineInstr *mi = mii;
-      if (tii->isIdentityCopy(*mi))
-        continue;
-
-      if (mi->getOpcode() == TargetInstrInfo::IMPLICIT_DEF)
+      if (tii->isIdentityCopy(*mi) || mi->isImplicitDef() || mi->isDebugValue())
         continue;
 
       for (unsigned i = 0, e = mi->getNumOperands(); i != e; ++i) {
diff --git a/libclamav/c++/llvm/lib/CodeGen/CodePlacementOpt.cpp b/libclamav/c++/llvm/lib/CodeGen/CodePlacementOpt.cpp
index 126700b..a13a310 100644
--- a/libclamav/c++/llvm/lib/CodeGen/CodePlacementOpt.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/CodePlacementOpt.cpp
@@ -106,7 +106,7 @@ bool CodePlacementOpt::HasAnalyzableTerminator(MachineBasicBlock *MBB) {
   // At the time of this writing, there are blocks which AnalyzeBranch
   // thinks end in single uncoditional branches, yet which have two CFG
   // successors. Code in this file is not prepared to reason about such things.
-  if (!MBB->empty() && MBB->back().getOpcode() == TargetInstrInfo::EH_LABEL)
+  if (!MBB->empty() && MBB->back().isEHLabel())
     return false;
 
   // Aggressively handle return blocks and similar constructs.
@@ -115,7 +115,7 @@ bool CodePlacementOpt::HasAnalyzableTerminator(MachineBasicBlock *MBB) {
   // Ask the target's AnalyzeBranch if it can handle this block.
   MachineBasicBlock *TBB = 0, *FBB = 0;
   SmallVector<MachineOperand, 4> Cond;
-  // Make the the terminator is understood.
+  // Make sure the terminator is understood.
   if (TII->AnalyzeBranch(*MBB, TBB, FBB, Cond))
     return false;
   // Make sure we have the option of reversing the condition.
diff --git a/libclamav/c++/llvm/lib/CodeGen/DeadMachineInstructionElim.cpp b/libclamav/c++/llvm/lib/CodeGen/DeadMachineInstructionElim.cpp
index 0982eab..a215a19 100644
--- a/libclamav/c++/llvm/lib/CodeGen/DeadMachineInstructionElim.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/DeadMachineInstructionElim.cpp
@@ -11,6 +11,7 @@
 //
 //===----------------------------------------------------------------------===//
 
+#define DEBUG_TYPE "codegen-dce"
 #include "llvm/CodeGen/Passes.h"
 #include "llvm/Pass.h"
 #include "llvm/CodeGen/MachineFunctionPass.h"
@@ -19,8 +20,11 @@
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Target/TargetMachine.h"
+#include "llvm/ADT/Statistic.h"
 using namespace llvm;
 
+STATISTIC(NumDeletes,          "Number of dead instructions deleted");
+
 namespace {
   class DeadMachineInstructionElim : public MachineFunctionPass {
     virtual bool runOnMachineFunction(MachineFunction &MF);
@@ -51,7 +55,7 @@ FunctionPass *llvm::createDeadMachineInstructionElimPass() {
 bool DeadMachineInstructionElim::isDead(const MachineInstr *MI) const {
   // Don't delete instructions with side effects.
   bool SawStore = false;
-  if (!MI->isSafeToMove(TII, SawStore, 0))
+  if (!MI->isSafeToMove(TII, SawStore, 0) && !MI->isPHI())
     return false;
 
   // Examine each operand.
@@ -60,8 +64,8 @@ bool DeadMachineInstructionElim::isDead(const MachineInstr *MI) const {
     if (MO.isReg() && MO.isDef()) {
       unsigned Reg = MO.getReg();
       if (TargetRegisterInfo::isPhysicalRegister(Reg) ?
-          LivePhysRegs[Reg] : !MRI->use_empty(Reg)) {
-        // This def has a use. Don't delete the instruction!
+          LivePhysRegs[Reg] : !MRI->use_nodbg_empty(Reg)) {
+        // This def has a non-debug use. Don't delete the instruction!
         return false;
       }
     }
@@ -110,8 +114,31 @@ bool DeadMachineInstructionElim::runOnMachineFunction(MachineFunction &MF) {
       // If the instruction is dead, delete it!
       if (isDead(MI)) {
         DEBUG(dbgs() << "DeadMachineInstructionElim: DELETING: " << *MI);
+        // It is possible that some DBG_VALUE instructions refer to this
+        // instruction.  Examine each def operand for such references;
+        // if found, mark the DBG_VALUE as undef (but don't delete it).
+        for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
+          const MachineOperand &MO = MI->getOperand(i);
+          if (!MO.isReg() || !MO.isDef())
+            continue;
+          unsigned Reg = MO.getReg();
+          if (!TargetRegisterInfo::isVirtualRegister(Reg))
+            continue;
+          MachineRegisterInfo::use_iterator nextI;
+          for (MachineRegisterInfo::use_iterator I = MRI->use_begin(Reg),
+               E = MRI->use_end(); I!=E; I=nextI) {
+            nextI = llvm::next(I);  // I is invalidated by the setReg
+            MachineOperand& Use = I.getOperand();
+            MachineInstr *UseMI = Use.getParent();
+            if (UseMI==MI)
+              continue;
+            assert(Use.isDebug());
+            UseMI->getOperand(0).setReg(0U);
+          }
+        }
         AnyChanges = true;
         MI->eraseFromParent();
+        ++NumDeletes;
         MIE = MBB->rend();
         // MII is now pointing to the next instruction to process,
         // so don't increment it.
diff --git a/libclamav/c++/llvm/lib/CodeGen/ELFWriter.cpp b/libclamav/c++/llvm/lib/CodeGen/ELFWriter.cpp
index de45e09..0979c04 100644
--- a/libclamav/c++/llvm/lib/CodeGen/ELFWriter.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/ELFWriter.cpp
@@ -37,7 +37,6 @@
 #include "llvm/PassManager.h"
 #include "llvm/DerivedTypes.h"
 #include "llvm/CodeGen/BinaryObject.h"
-#include "llvm/CodeGen/FileWriters.h"
 #include "llvm/CodeGen/MachineCodeEmitter.h"
 #include "llvm/CodeGen/ObjectCodeEmitter.h"
 #include "llvm/CodeGen/MachineCodeEmitter.h"
@@ -59,15 +58,6 @@ using namespace llvm;
 
 char ELFWriter::ID = 0;
 
-/// AddELFWriter - Add the ELF writer to the function pass manager
-ObjectCodeEmitter *llvm::AddELFWriter(PassManagerBase &PM,
-                                      raw_ostream &O,
-                                      TargetMachine &TM) {
-  ELFWriter *EW = new ELFWriter(O, TM);
-  PM.add(EW);
-  return EW->getObjectCodeEmitter();
-}
-
 //===----------------------------------------------------------------------===//
 //                          ELFWriter Implementation
 //===----------------------------------------------------------------------===//
diff --git a/libclamav/c++/llvm/lib/CodeGen/ExactHazardRecognizer.cpp b/libclamav/c++/llvm/lib/CodeGen/ExactHazardRecognizer.cpp
index 266c74c..61959bb 100644
--- a/libclamav/c++/llvm/lib/CodeGen/ExactHazardRecognizer.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/ExactHazardRecognizer.cpp
@@ -7,7 +7,7 @@
 //
 //===----------------------------------------------------------------------===//
 //
-// This implements a a hazard recognizer using the instructions itineraries
+// This implements a hazard recognizer using the instructions itineraries
 // defined for the current target.
 //
 //===----------------------------------------------------------------------===//
diff --git a/libclamav/c++/llvm/lib/CodeGen/GCStrategy.cpp b/libclamav/c++/llvm/lib/CodeGen/GCStrategy.cpp
index 79b2986..b5006fd 100644
--- a/libclamav/c++/llvm/lib/CodeGen/GCStrategy.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/GCStrategy.cpp
@@ -335,7 +335,7 @@ unsigned MachineCodeAnalysis::InsertLabel(MachineBasicBlock &MBB,
   unsigned Label = MMI->NextLabelID();
   
   BuildMI(MBB, MI, DL,
-          TII->get(TargetInstrInfo::GC_LABEL)).addImm(Label);
+          TII->get(TargetOpcode::GC_LABEL)).addImm(Label);
   
   return Label;
 }
diff --git a/libclamav/c++/llvm/lib/CodeGen/LLVMTargetMachine.cpp b/libclamav/c++/llvm/lib/CodeGen/LLVMTargetMachine.cpp
index 837e184..278de02 100644
--- a/libclamav/c++/llvm/lib/CodeGen/LLVMTargetMachine.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/LLVMTargetMachine.cpp
@@ -14,16 +14,20 @@
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/PassManager.h"
 #include "llvm/Pass.h"
+#include "llvm/Analysis/Verifier.h"
 #include "llvm/Assembly/PrintModulePass.h"
 #include "llvm/CodeGen/AsmPrinter.h"
 #include "llvm/CodeGen/Passes.h"
-#include "llvm/CodeGen/FileWriters.h"
 #include "llvm/CodeGen/GCStrategy.h"
 #include "llvm/CodeGen/MachineFunctionAnalysis.h"
 #include "llvm/Target/TargetOptions.h"
 #include "llvm/MC/MCAsmInfo.h"
+#include "llvm/MC/MCContext.h"
+#include "llvm/MC/MCStreamer.h"
+#include "llvm/Target/TargetData.h"
 #include "llvm/Target/TargetRegistry.h"
 #include "llvm/Transforms/Scalar.h"
+#include "llvm/ADT/OwningPtr.h"
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/FormattedStream.h"
@@ -57,14 +61,24 @@ static cl::opt<bool> PrintLSR("print-lsr-output", cl::Hidden,
     cl::desc("Print LLVM IR produced by the loop-reduce pass"));
 static cl::opt<bool> PrintISelInput("print-isel-input", cl::Hidden,
     cl::desc("Print LLVM IR input to isel pass"));
-static cl::opt<bool> PrintEmittedAsm("print-emitted-asm", cl::Hidden,
-    cl::desc("Dump emitter generated instructions as assembly"));
 static cl::opt<bool> PrintGCInfo("print-gc", cl::Hidden,
     cl::desc("Dump garbage collector data"));
 static cl::opt<bool> VerifyMachineCode("verify-machineinstrs", cl::Hidden,
     cl::desc("Verify generated machine code"),
     cl::init(getenv("LLVM_VERIFY_MACHINEINSTRS")!=NULL));
 
+static cl::opt<cl::boolOrDefault>
+AsmVerbose("asm-verbose", cl::desc("Add comments to directives."),
+           cl::init(cl::BOU_UNSET));
+
+static bool getVerboseAsm() {
+  switch (AsmVerbose) {
+  default:
+  case cl::BOU_UNSET: return TargetMachine::getAsmVerbosityDefault();
+  case cl::BOU_TRUE:  return true;
+  case cl::BOU_FALSE: return false;
+  }      
+}
 
 // Enable or disable FastISel. Both options are needed, because
 // FastISel is enabled by default with -fast, and we wish to be
@@ -98,139 +112,81 @@ LLVMTargetMachine::setCodeModelForStatic() {
   setCodeModel(CodeModel::Small);
 }
 
-FileModel::Model
-LLVMTargetMachine::addPassesToEmitFile(PassManagerBase &PM,
-                                       formatted_raw_ostream &Out,
-                                       CodeGenFileType FileType,
-                                       CodeGenOpt::Level OptLevel) {
+bool LLVMTargetMachine::addPassesToEmitFile(PassManagerBase &PM,
+                                            formatted_raw_ostream &Out,
+                                            CodeGenFileType FileType,
+                                            CodeGenOpt::Level OptLevel) {
   // Add common CodeGen passes.
   if (addCommonCodeGenPasses(PM, OptLevel))
-    return FileModel::Error;
+    return true;
 
+  OwningPtr<MCContext> Context(new MCContext());
+  OwningPtr<MCStreamer> AsmStreamer;
+
+  formatted_raw_ostream *LegacyOutput;
   switch (FileType) {
-  default:
+  default: return true;
+  case CGFT_AssemblyFile: {
+    const MCAsmInfo &MAI = *getMCAsmInfo();
+    MCInstPrinter *InstPrinter =
+      getTarget().createMCInstPrinter(MAI.getAssemblerDialect(), MAI, Out);
+    AsmStreamer.reset(createAsmStreamer(*Context, Out, MAI,
+                                        getTargetData()->isLittleEndian(),
+                                        getVerboseAsm(), InstPrinter,
+                                        /*codeemitter*/0));
+    // Set the AsmPrinter's "O" to the output file.
+    LegacyOutput = &Out;
     break;
-  case TargetMachine::AssemblyFile:
-    if (addAssemblyEmitter(PM, OptLevel, getAsmVerbosityDefault(), Out))
-      return FileModel::Error;
-    return FileModel::AsmFile;
-  case TargetMachine::ObjectFile:
-    if (!addObjectFileEmitter(PM, OptLevel, Out))
-      return FileModel::MachOFile;
-    else if (getELFWriterInfo())
-      return FileModel::ElfFile; 
   }
-  return FileModel::Error;
-}
-
-bool LLVMTargetMachine::addAssemblyEmitter(PassManagerBase &PM,
-                                           CodeGenOpt::Level OptLevel,
-                                           bool Verbose,
-                                           formatted_raw_ostream &Out) {
+  case CGFT_ObjectFile: {
+    // Create the code emitter for the target if it exists.  If not, .o file
+    // emission fails.
+    MCCodeEmitter *MCE = getTarget().createCodeEmitter(*this, *Context);
+    if (MCE == 0)
+      return true;
+    
+    AsmStreamer.reset(createMachOStreamer(*Context, Out, MCE));
+    
+    // Any output to the asmprinter's "O" stream is bad and needs to be fixed,
+    // force it to come out stderr.
+    // FIXME: this is horrible and leaks, eventually remove the raw_ostream from
+    // asmprinter.
+    LegacyOutput = new formatted_raw_ostream(errs());
+    break;
+  }
+  case CGFT_Null:
+    // The Null output is intended for use for performance analysis and testing,
+    // not real users.
+    AsmStreamer.reset(createNullStreamer(*Context));
+    // Any output to the asmprinter's "O" stream is bad and needs to be fixed,
+    // force it to come out stderr.
+    // FIXME: this is horrible and leaks, eventually remove the raw_ostream from
+    // asmprinter.
+    LegacyOutput = new formatted_raw_ostream(errs());
+    break;
+  }
+  
+  // Create the AsmPrinter, which takes ownership of Context and AsmStreamer
+  // if successful.
   FunctionPass *Printer =
-    getTarget().createAsmPrinter(Out, *this, getMCAsmInfo(), Verbose);
-  if (!Printer)
-    return true;
-
-  PM.add(Printer);
-  return false;
-}
-
-bool LLVMTargetMachine::addObjectFileEmitter(PassManagerBase &PM,
-                                             CodeGenOpt::Level OptLevel,
-                                             formatted_raw_ostream &Out) {
-  MCCodeEmitter *Emitter = getTarget().createCodeEmitter(*this);
-  if (!Emitter)
+    getTarget().createAsmPrinter(*LegacyOutput, *this, *Context, *AsmStreamer,
+                                 getMCAsmInfo());
+  if (Printer == 0)
     return true;
   
-  PM.add(createMachOWriter(Out, *this, getMCAsmInfo(), Emitter));
-  return false;
-}
-
-/// addPassesToEmitFileFinish - If the passes to emit the specified file had to
-/// be split up (e.g., to add an object writer pass), this method can be used to
-/// finish up adding passes to emit the file, if necessary.
-bool LLVMTargetMachine::addPassesToEmitFileFinish(PassManagerBase &PM,
-                                                  MachineCodeEmitter *MCE,
-                                                  CodeGenOpt::Level OptLevel) {
-  // Make sure the code model is set.
-  setCodeModelForStatic();
+  // If successful, createAsmPrinter took ownership of AsmStreamer and Context.
+  Context.take(); AsmStreamer.take();
   
-  if (MCE)
-    addSimpleCodeEmitter(PM, OptLevel, *MCE);
-  if (PrintEmittedAsm)
-    addAssemblyEmitter(PM, OptLevel, true, ferrs());
-
-  PM.add(createGCInfoDeleter());
-
-  return false; // success!
-}
-
-/// addPassesToEmitFileFinish - If the passes to emit the specified file had to
-/// be split up (e.g., to add an object writer pass), this method can be used to
-/// finish up adding passes to emit the file, if necessary.
-bool LLVMTargetMachine::addPassesToEmitFileFinish(PassManagerBase &PM,
-                                                  JITCodeEmitter *JCE,
-                                                  CodeGenOpt::Level OptLevel) {
-  // Make sure the code model is set.
-  setCodeModelForJIT();
+  PM.add(Printer);
   
-  if (JCE)
-    addSimpleCodeEmitter(PM, OptLevel, *JCE);
-  if (PrintEmittedAsm)
-    addAssemblyEmitter(PM, OptLevel, true, ferrs());
-
-  PM.add(createGCInfoDeleter());
-
-  return false; // success!
-}
-
-/// addPassesToEmitFileFinish - If the passes to emit the specified file had to
-/// be split up (e.g., to add an object writer pass), this method can be used to
-/// finish up adding passes to emit the file, if necessary.
-bool LLVMTargetMachine::addPassesToEmitFileFinish(PassManagerBase &PM,
-                                                  ObjectCodeEmitter *OCE,
-                                                  CodeGenOpt::Level OptLevel) {
   // Make sure the code model is set.
   setCodeModelForStatic();
-  
-  if (OCE)
-    addSimpleCodeEmitter(PM, OptLevel, *OCE);
-  if (PrintEmittedAsm)
-    addAssemblyEmitter(PM, OptLevel, true, ferrs());
-
-  PM.add(createGCInfoDeleter());
-
-  return false; // success!
-}
-
-/// addPassesToEmitMachineCode - Add passes to the specified pass manager to
-/// get machine code emitted.  This uses a MachineCodeEmitter object to handle
-/// actually outputting the machine code and resolving things like the address
-/// of functions.  This method should returns true if machine code emission is
-/// not supported.
-///
-bool LLVMTargetMachine::addPassesToEmitMachineCode(PassManagerBase &PM,
-                                                   MachineCodeEmitter &MCE,
-                                                   CodeGenOpt::Level OptLevel) {
-  // Make sure the code model is set.
-  setCodeModelForJIT();
-  
-  // Add common CodeGen passes.
-  if (addCommonCodeGenPasses(PM, OptLevel))
-    return true;
-
-  addCodeEmitter(PM, OptLevel, MCE);
-  if (PrintEmittedAsm)
-    addAssemblyEmitter(PM, OptLevel, true, ferrs());
-
   PM.add(createGCInfoDeleter());
-
-  return false; // success!
+  return false;
 }
 
 /// addPassesToEmitMachineCode - Add passes to the specified pass manager to
-/// get machine code emitted.  This uses a MachineCodeEmitter object to handle
+/// get machine code emitted.  This uses a JITCodeEmitter object to handle
 /// actually outputting the machine code and resolving things like the address
 /// of functions.  This method should returns true if machine code emission is
 /// not supported.
@@ -246,9 +202,6 @@ bool LLVMTargetMachine::addPassesToEmitMachineCode(PassManagerBase &PM,
     return true;
 
   addCodeEmitter(PM, OptLevel, JCE);
-  if (PrintEmittedAsm)
-    addAssemblyEmitter(PM, OptLevel, true, ferrs());
-
   PM.add(createGCInfoDeleter());
 
   return false; // success!
@@ -282,6 +235,9 @@ bool LLVMTargetMachine::addCommonCodeGenPasses(PassManagerBase &PM,
     PM.add(createLoopStrengthReducePass(getTargetLowering()));
     if (PrintLSR)
       PM.add(createPrintFunctionPass("\n\n*** Code after LSR ***\n", &dbgs()));
+#ifndef NDEBUG
+    PM.add(createVerifierPass());
+#endif
   }
 
   // Turn exception handling constructs into something the code generators can
@@ -339,6 +295,16 @@ bool LLVMTargetMachine::addCommonCodeGenPasses(PassManagerBase &PM,
   printAndVerify(PM, "After Instruction Selection",
                  /* allowDoubleDefs= */ true);
 
+  // Optimize PHIs before DCE: removing dead PHI cycles may make more
+  // instructions dead.
+  if (OptLevel != CodeGenOpt::None)
+    PM.add(createOptimizePHIsPass());
+
+  // Delete dead machine instructions regardless of optimization level.
+  PM.add(createDeadMachineInstructionElimPass());
+  printAndVerify(PM, "After codegen DCE pass",
+                 /* allowDoubleDefs= */ true);
+
   if (OptLevel != CodeGenOpt::None) {
     PM.add(createOptimizeExtsPass());
     if (!DisableMachineLICM)
diff --git a/libclamav/c++/llvm/lib/CodeGen/LiveIntervalAnalysis.cpp b/libclamav/c++/llvm/lib/CodeGen/LiveIntervalAnalysis.cpp
index 8746bf9..f6bf433 100644
--- a/libclamav/c++/llvm/lib/CodeGen/LiveIntervalAnalysis.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/LiveIntervalAnalysis.cpp
@@ -140,7 +140,7 @@ void LiveIntervals::printInstrs(raw_ostream &OS) const {
        << ":\t\t# derived from " << mbbi->getName() << "\n";
     for (MachineBasicBlock::iterator mii = mbbi->begin(),
            mie = mbbi->end(); mii != mie; ++mii) {
-      if (mii->getOpcode()==TargetInstrInfo::DEBUG_VALUE)
+      if (mii->isDebugValue())
         OS << SlotIndex::getEmptyKey() << '\t' << *mii;
       else
         OS << getInstructionIndex(mii) << '\t' << *mii;
@@ -288,9 +288,7 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
     VNInfo *ValNo;
     MachineInstr *CopyMI = NULL;
     unsigned SrcReg, DstReg, SrcSubReg, DstSubReg;
-    if (mi->getOpcode() == TargetInstrInfo::EXTRACT_SUBREG ||
-        mi->getOpcode() == TargetInstrInfo::INSERT_SUBREG ||
-        mi->getOpcode() == TargetInstrInfo::SUBREG_TO_REG ||
+    if (mi->isExtractSubreg() || mi->isInsertSubreg() || mi->isSubregToReg() ||
         tii_->isMoveInstr(*mi, SrcReg, DstReg, SrcSubReg, DstSubReg))
       CopyMI = mi;
     // Earlyclobbers move back one.
@@ -460,9 +458,7 @@ void LiveIntervals::handleVirtualRegisterDef(MachineBasicBlock *mbb,
       VNInfo *ValNo;
       MachineInstr *CopyMI = NULL;
       unsigned SrcReg, DstReg, SrcSubReg, DstSubReg;
-      if (mi->getOpcode() == TargetInstrInfo::EXTRACT_SUBREG ||
-          mi->getOpcode() == TargetInstrInfo::INSERT_SUBREG ||
-          mi->getOpcode() == TargetInstrInfo::SUBREG_TO_REG ||
+      if (mi->isExtractSubreg() || mi->isInsertSubreg() || mi->isSubregToReg()||
           tii_->isMoveInstr(*mi, SrcReg, DstReg, SrcSubReg, DstSubReg))
         CopyMI = mi;
       ValNo = interval.getNextValue(defIndex, CopyMI, true, VNInfoAllocator);
@@ -516,6 +512,8 @@ void LiveIntervals::handlePhysicalRegisterDef(MachineBasicBlock *MBB,
   baseIndex = baseIndex.getNextIndex();
   while (++mi != MBB->end()) {
 
+    if (mi->isDebugValue())
+      continue;
     if (getInstructionFromIndex(baseIndex) == 0)
       baseIndex = indexes_->getNextNonNullIndex(baseIndex);
 
@@ -531,8 +529,8 @@ void LiveIntervals::handlePhysicalRegisterDef(MachineBasicBlock *MBB,
           end = baseIndex.getDefIndex();
         } else {
           // Another instruction redefines the register before it is ever read.
-          // Then the register is essentially dead at the instruction that defines
-          // it. Hence its interval is:
+          // Then the register is essentially dead at the instruction that
+          // defines it. Hence its interval is:
           // [defSlot(def), defSlot(def)+1)
           DEBUG(dbgs() << " dead");
           end = start.getStoreIndex();
@@ -577,9 +575,7 @@ void LiveIntervals::handleRegisterDef(MachineBasicBlock *MBB,
   else if (allocatableRegs_[MO.getReg()]) {
     MachineInstr *CopyMI = NULL;
     unsigned SrcReg, DstReg, SrcSubReg, DstSubReg;
-    if (MI->getOpcode() == TargetInstrInfo::EXTRACT_SUBREG ||
-        MI->getOpcode() == TargetInstrInfo::INSERT_SUBREG ||
-        MI->getOpcode() == TargetInstrInfo::SUBREG_TO_REG ||
+    if (MI->isExtractSubreg() || MI->isInsertSubreg() || MI->isSubregToReg() ||
         tii_->isMoveInstr(*MI, SrcReg, DstReg, SrcSubReg, DstSubReg))
       CopyMI = MI;
     handlePhysicalRegisterDef(MBB, MI, MIIdx, MO,
@@ -612,8 +608,16 @@ void LiveIntervals::handleLiveInRegister(MachineBasicBlock *MBB,
 
   SlotIndex end = baseIndex;
   bool SeenDefUse = false;
-  
-  while (mi != MBB->end()) {
+
+  MachineBasicBlock::iterator E = MBB->end();  
+  while (mi != E) {
+    if (mi->isDebugValue()) {
+      ++mi;
+      if (mi != E && !mi->isDebugValue()) {
+        baseIndex = indexes_->getNextNonNullIndex(baseIndex);
+      }
+      continue;
+    }
     if (mi->killsRegister(interval.reg, tri_)) {
       DEBUG(dbgs() << " killed");
       end = baseIndex.getDefIndex();
@@ -631,7 +635,7 @@ void LiveIntervals::handleLiveInRegister(MachineBasicBlock *MBB,
     }
 
     ++mi;
-    if (mi != MBB->end()) {
+    if (mi != E && !mi->isDebugValue()) {
       baseIndex = indexes_->getNextNonNullIndex(baseIndex);
     }
   }
@@ -671,6 +675,9 @@ void LiveIntervals::computeIntervals() {
   for (MachineFunction::iterator MBBI = mf_->begin(), E = mf_->end();
        MBBI != E; ++MBBI) {
     MachineBasicBlock *MBB = MBBI;
+    if (MBB->empty())
+      continue;
+
     // Track the index of the current machine instr.
     SlotIndex MIIndex = getMBBStartIdx(MBB);
     DEBUG(dbgs() << MBB->getName() << ":\n");
@@ -693,7 +700,7 @@ void LiveIntervals::computeIntervals() {
     for (MachineBasicBlock::iterator MI = MBB->begin(), miEnd = MBB->end();
          MI != miEnd; ++MI) {
       DEBUG(dbgs() << MIIndex << "\t" << *MI);
-      if (MI->getOpcode()==TargetInstrInfo::DEBUG_VALUE)
+      if (MI->isDebugValue())
         continue;
 
       // Handle defs.
@@ -742,7 +749,7 @@ unsigned LiveIntervals::getVNInfoSourceReg(const VNInfo *VNI) const {
   if (!VNI->getCopy())
     return 0;
 
-  if (VNI->getCopy()->getOpcode() == TargetInstrInfo::EXTRACT_SUBREG) {
+  if (VNI->getCopy()->isExtractSubreg()) {
     // If it's extracting out of a physical register, return the sub-register.
     unsigned Reg = VNI->getCopy()->getOperand(1).getReg();
     if (TargetRegisterInfo::isPhysicalRegister(Reg)) {
@@ -756,8 +763,8 @@ unsigned LiveIntervals::getVNInfoSourceReg(const VNInfo *VNI) const {
       Reg = tri_->getSubReg(Reg, VNI->getCopy()->getOperand(2).getImm());
     }
     return Reg;
-  } else if (VNI->getCopy()->getOpcode() == TargetInstrInfo::INSERT_SUBREG ||
-             VNI->getCopy()->getOpcode() == TargetInstrInfo::SUBREG_TO_REG)
+  } else if (VNI->getCopy()->isInsertSubreg() ||
+             VNI->getCopy()->isSubregToReg())
     return VNI->getCopy()->getOperand(2).getReg();
 
   unsigned SrcReg, DstReg, SrcSubReg, DstSubReg;
@@ -919,7 +926,7 @@ bool LiveIntervals::tryFoldMemoryOperand(MachineInstr* &MI,
                                          SmallVector<unsigned, 2> &Ops,
                                          bool isSS, int Slot, unsigned Reg) {
   // If it is an implicit def instruction, just delete it.
-  if (MI->getOpcode() == TargetInstrInfo::IMPLICIT_DEF) {
+  if (MI->isImplicitDef()) {
     RemoveMachineInstrFromMaps(MI);
     vrm.RemoveMachineInstrFromMaps(MI);
     MI->eraseFromParent();
@@ -1059,7 +1066,7 @@ rewriteInstructionForSpills(const LiveInterval &li, const VNInfo *VNI,
       // If this is the rematerializable definition MI itself and
       // all of its uses are rematerialized, simply delete it.
       if (MI == ReMatOrigDefMI && CanDelete) {
-        DEBUG(dbgs() << "\t\t\t\tErasing re-materlizable def: "
+        DEBUG(dbgs() << "\t\t\t\tErasing re-materializable def: "
                      << MI << '\n');
         RemoveMachineInstrFromMaps(MI);
         vrm.RemoveMachineInstrFromMaps(MI);
@@ -1302,6 +1309,12 @@ rewriteInstructionsForSpills(const LiveInterval &li, bool TrySplit,
     MachineInstr *MI = &*ri;
     MachineOperand &O = ri.getOperand();
     ++ri;
+    if (MI->isDebugValue()) {
+      // Remove debug info for now.
+      O.setReg(0U);
+      DEBUG(dbgs() << "Removing debug info due to spill:" << "\t" << *MI);
+      continue;
+    }
     assert(!O.isImplicit() && "Spilling register that's used as implicit use?");
     SlotIndex index = getInstructionIndex(MI);
     if (index < start || index >= end)
@@ -1525,7 +1538,7 @@ LiveIntervals::handleSpilledImpDefs(const LiveInterval &li, VirtRegMap &vrm,
     MachineInstr *MI = &*ri;
     ++ri;
     if (O.isDef()) {
-      assert(MI->getOpcode() == TargetInstrInfo::IMPLICIT_DEF &&
+      assert(MI->isImplicitDef() &&
              "Register def was not rewritten?");
       RemoveMachineInstrFromMaps(MI);
       vrm.RemoveMachineInstrFromMaps(MI);
@@ -2056,7 +2069,7 @@ bool LiveIntervals::spillPhysRegAroundRegDefsUses(const LiveInterval &li,
         std::string msg;
         raw_string_ostream Msg(msg);
         Msg << "Ran out of registers during register allocation!";
-        if (MI->getOpcode() == TargetInstrInfo::INLINEASM) {
+        if (MI->isInlineAsm()) {
           Msg << "\nPlease check your inline asm statement for invalid "
               << "constraints:\n";
           MI->print(Msg, tm_);
diff --git a/libclamav/c++/llvm/lib/CodeGen/LiveVariables.cpp b/libclamav/c++/llvm/lib/CodeGen/LiveVariables.cpp
index b44a220..8a124dc 100644
--- a/libclamav/c++/llvm/lib/CodeGen/LiveVariables.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/LiveVariables.cpp
@@ -543,6 +543,8 @@ bool LiveVariables::runOnMachineFunction(MachineFunction &mf) {
     for (MachineBasicBlock::iterator I = MBB->begin(), E = MBB->end();
          I != E; ++I) {
       MachineInstr *MI = I;
+      if (MI->isDebugValue())
+        continue;
       DistanceMap.insert(std::make_pair(MI, Dist++));
 
       // Process all of the operands of the instruction...
@@ -550,7 +552,7 @@ bool LiveVariables::runOnMachineFunction(MachineFunction &mf) {
 
       // Unless it is a PHI node.  In this case, ONLY process the DEF, not any
       // of the uses.  They will be handled in other basic blocks.
-      if (MI->getOpcode() == TargetInstrInfo::PHI)
+      if (MI->isPHI())
         NumOperandsToProcess = 1;
 
       SmallVector<unsigned, 4> UseRegs;
@@ -692,7 +694,7 @@ void LiveVariables::analyzePHINodes(const MachineFunction& Fn) {
   for (MachineFunction::const_iterator I = Fn.begin(), E = Fn.end();
        I != E; ++I)
     for (MachineBasicBlock::const_iterator BBI = I->begin(), BBE = I->end();
-         BBI != BBE && BBI->getOpcode() == TargetInstrInfo::PHI; ++BBI)
+         BBI != BBE && BBI->isPHI(); ++BBI)
       for (unsigned i = 1, e = BBI->getNumOperands(); i != e; i += 2)
         PHIVarInfo[BBI->getOperand(i + 1).getMBB()->getNumber()]
           .push_back(BBI->getOperand(i).getReg());
@@ -771,8 +773,7 @@ void LiveVariables::addNewBlock(MachineBasicBlock *BB,
 
   // All registers used by PHI nodes in SuccBB must be live through BB.
   for (MachineBasicBlock::const_iterator BBI = SuccBB->begin(),
-         BBE = SuccBB->end();
-       BBI != BBE && BBI->getOpcode() == TargetInstrInfo::PHI; ++BBI)
+         BBE = SuccBB->end(); BBI != BBE && BBI->isPHI(); ++BBI)
     for (unsigned i = 1, e = BBI->getNumOperands(); i != e; i += 2)
       if (BBI->getOperand(i+1).getMBB() == BB)
         getVarInfo(BBI->getOperand(i).getReg()).AliveBlocks.set(NumNew);
diff --git a/libclamav/c++/llvm/lib/CodeGen/LowerSubregs.cpp b/libclamav/c++/llvm/lib/CodeGen/LowerSubregs.cpp
index 1121d9b..b4ef648 100644
--- a/libclamav/c++/llvm/lib/CodeGen/LowerSubregs.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/LowerSubregs.cpp
@@ -129,7 +129,7 @@ bool LowerSubregsInstructionPass::LowerExtract(MachineInstr *MI) {
     if (MI->getOperand(1).isKill()) {
       // We must make sure the super-register gets killed. Replace the
       // instruction with KILL.
-      MI->setDesc(TII->get(TargetInstrInfo::KILL));
+      MI->setDesc(TII->get(TargetOpcode::KILL));
       MI->RemoveOperand(2);     // SubIdx
       DEBUG(dbgs() << "subreg: replace by: " << *MI);
       return true;
@@ -242,7 +242,7 @@ bool LowerSubregsInstructionPass::LowerInsert(MachineInstr *MI) {
     // <undef>, we need to make sure it is alive by inserting a KILL
     if (MI->getOperand(1).isUndef() && !MI->getOperand(0).isDead()) {
       MachineInstrBuilder MIB = BuildMI(*MBB, MI, MI->getDebugLoc(),
-                                TII->get(TargetInstrInfo::KILL), DstReg);
+                                TII->get(TargetOpcode::KILL), DstReg);
       if (MI->getOperand(2).isUndef())
         MIB.addReg(InsReg, RegState::Undef);
       else
@@ -260,7 +260,7 @@ bool LowerSubregsInstructionPass::LowerInsert(MachineInstr *MI) {
       // If the source register being inserted is undef, then this becomes a
       // KILL.
       BuildMI(*MBB, MI, MI->getDebugLoc(),
-              TII->get(TargetInstrInfo::KILL), DstSubReg);
+              TII->get(TargetOpcode::KILL), DstSubReg);
     else {
       bool Emitted = TII->copyRegToReg(*MBB, MI, DstSubReg, InsReg, TRC0, TRC1);
       (void)Emitted;
@@ -314,11 +314,11 @@ bool LowerSubregsInstructionPass::runOnMachineFunction(MachineFunction &MF) {
          mi != me;) {
       MachineBasicBlock::iterator nmi = llvm::next(mi);
       MachineInstr *MI = mi;
-      if (MI->getOpcode() == TargetInstrInfo::EXTRACT_SUBREG) {
+      if (MI->isExtractSubreg()) {
         MadeChange |= LowerExtract(MI);
-      } else if (MI->getOpcode() == TargetInstrInfo::INSERT_SUBREG) {
+      } else if (MI->isInsertSubreg()) {
         MadeChange |= LowerInsert(MI);
-      } else if (MI->getOpcode() == TargetInstrInfo::SUBREG_TO_REG) {
+      } else if (MI->isSubregToReg()) {
         MadeChange |= LowerSubregToReg(MI);
       }
       mi = nmi;
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachOWriter.cpp b/libclamav/c++/llvm/lib/CodeGen/MachOWriter.cpp
deleted file mode 100644
index e8bbe21..0000000
--- a/libclamav/c++/llvm/lib/CodeGen/MachOWriter.cpp
+++ /dev/null
@@ -1,125 +0,0 @@
-//===-- MachOWriter.cpp - Target-independent Mach-O Writer code -----------===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This file implements the target-independent Mach-O writer.  This file writes
-// out the Mach-O file in the following order:
-//
-//  #1 FatHeader (universal-only)
-//  #2 FatArch (universal-only, 1 per universal arch)
-//  Per arch:
-//    #3 Header
-//    #4 Load Commands
-//    #5 Sections
-//    #6 Relocations
-//    #7 Symbols
-//    #8 Strings
-//
-//===----------------------------------------------------------------------===//
-
-#include "MachOWriter.h"
-#include "llvm/Function.h"
-#include "llvm/CodeGen/FileWriters.h"
-#include "llvm/CodeGen/MachineFunction.h"
-#include "llvm/MC/MCAsmInfo.h"
-#include "llvm/MC/MCContext.h"
-#include "llvm/MC/MCCodeEmitter.h"
-#include "llvm/MC/MCInst.h"
-#include "llvm/MC/MCStreamer.h"
-#include "llvm/Support/ErrorHandling.h"
-#include "llvm/Support/FormattedStream.h"
-#include "llvm/Support/raw_ostream.h"
-#include "llvm/Target/Mangler.h"
-#include "llvm/Target/TargetData.h"
-#include "llvm/Target/TargetLowering.h"
-#include "llvm/Target/TargetLoweringObjectFile.h"
-using namespace llvm;
-
-namespace llvm { 
-MachineFunctionPass *createMachOWriter(formatted_raw_ostream &O,
-                                       TargetMachine &TM,
-                                       const MCAsmInfo *T, 
-                                       MCCodeEmitter *MCE) { 
-  return new MachOWriter(O, TM, T, MCE);
-}
-}
-
-//===----------------------------------------------------------------------===//
-//                          MachOWriter Implementation
-//===----------------------------------------------------------------------===//
-
-char MachOWriter::ID = 0;
-
-MachOWriter::MachOWriter(formatted_raw_ostream &o, TargetMachine &tm,
-                         const MCAsmInfo *T, MCCodeEmitter *MCE)
-  : MachineFunctionPass(&ID), O(o), TM(tm), MAI(T), MCCE(MCE),
-    OutContext(*new MCContext()),
-    OutStreamer(*createMachOStreamer(OutContext, O, MCCE)) { 
-}
-
-MachOWriter::~MachOWriter() {
-  delete &OutStreamer;
-  delete &OutContext;
-  delete MCCE;
-}
-
-bool MachOWriter::doInitialization(Module &M) {
-  // Initialize TargetLoweringObjectFile.
-  TM.getTargetLowering()->getObjFileLowering().Initialize(OutContext, TM);
-
-  return false;
-}
-
-/// doFinalization - Now that the module has been completely processed, emit
-/// the Mach-O file to 'O'.
-bool MachOWriter::doFinalization(Module &M) {
-  OutStreamer.Finish();
-  return false;
-}
-
-bool MachOWriter::runOnMachineFunction(MachineFunction &MF) {
-  const Function *F = MF.getFunction();
-  TargetLoweringObjectFile &TLOF = TM.getTargetLowering()->getObjFileLowering();
-  const MCSection *S = TLOF.SectionForGlobal(F, Mang, TM);
-  OutStreamer.SwitchSection(S);
-
-  for (MachineFunction::const_iterator I = MF.begin(), E = MF.end();
-       I != E; ++I) {
-    // Print a label for the basic block.
-    for (MachineBasicBlock::const_iterator II = I->begin(), IE = I->end();
-         II != IE; ++II) {
-      const MachineInstr *MI = II;
-      MCInst OutMI;
-      OutMI.setOpcode(MI->getOpcode());
-
-      for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
-        const MachineOperand &MO = MI->getOperand(i);
-        MCOperand MCOp;
-
-        switch (MO.getType()) {
-          default:
-            MI->dump();
-            llvm_unreachable("unknown operand type");
-          case MachineOperand::MO_Register:
-            // Ignore all implicit register operands.
-            if (MO.isImplicit()) continue;
-            MCOp = MCOperand::CreateReg(MO.getReg());
-            break;
-          case MachineOperand::MO_Immediate:
-            MCOp = MCOperand::CreateImm(MO.getImm());
-            break;
-        }
-        OutMI.addOperand(MCOp);
-      }
-      
-      OutStreamer.EmitInstruction(OutMI);
-    }
-  }
-
-  return false;
-}
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachOWriter.h b/libclamav/c++/llvm/lib/CodeGen/MachOWriter.h
deleted file mode 100644
index 2e7e67d..0000000
--- a/libclamav/c++/llvm/lib/CodeGen/MachOWriter.h
+++ /dev/null
@@ -1,88 +0,0 @@
-//=== MachOWriter.h - Target-independent Mach-O writer support --*- C++ -*-===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This file defines the MachOWriter class.
-//
-//===----------------------------------------------------------------------===//
-
-#ifndef MACHOWRITER_H
-#define MACHOWRITER_H
-
-#include "llvm/CodeGen/MachineFunctionPass.h"
-#include "llvm/Target/TargetMachine.h"
-
-namespace llvm {
-  class GlobalVariable;
-  class Mangler;
-  class MCCodeEmitter;
-  class MCContext;
-  class MCStreamer;
-  
-  /// MachOWriter - This class implements the common target-independent code for
-  /// writing Mach-O files.  Targets should derive a class from this to
-  /// parameterize the output format.
-  ///
-  class MachOWriter : public MachineFunctionPass {
-    static char ID;
-
-  protected:
-    /// Output stream to send the resultant object file to.
-    ///
-    formatted_raw_ostream &O;
-
-    /// Target machine description.
-    ///
-    TargetMachine &TM;
-
-    /// Target Asm Printer information.
-    ///
-    const MCAsmInfo *MAI;
-    
-    /// MCCE - The MCCodeEmitter object that we are exposing to emit machine
-    /// code for functions to the .o file.
-    MCCodeEmitter *MCCE;
-    
-    /// OutContext - This is the context for the output file that we are
-    /// streaming.  This owns all of the global MC-related objects for the
-    /// generated translation unit.
-    MCContext &OutContext;
-    
-    /// OutStreamer - This is the MCStreamer object for the file we are
-    /// generating.  This contains the transient state for the current
-    /// translation unit that we are generating (such as the current section
-    /// etc).
-    MCStreamer &OutStreamer;
-    
-    /// Name-mangler for global names.
-    ///
-    Mangler *Mang;
-    
-    /// doInitialization - Emit the file header and all of the global variables
-    /// for the module to the Mach-O file.
-    bool doInitialization(Module &M);
-
-    /// doFinalization - Now that the module has been completely processed, emit
-    /// the Mach-O file to 'O'.
-    bool doFinalization(Module &M);
-
-    bool runOnMachineFunction(MachineFunction &MF);
-    
-  public:
-    explicit MachOWriter(formatted_raw_ostream &O, TargetMachine &TM,
-                         const MCAsmInfo *T, MCCodeEmitter *MCE);
-    
-    virtual ~MachOWriter();
-    
-    virtual const char *getPassName() const {
-      return "Mach-O Writer";
-    }
-  };
-}
-
-#endif
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineBasicBlock.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineBasicBlock.cpp
index 9c318a5..655a0bf 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineBasicBlock.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineBasicBlock.cpp
@@ -540,7 +540,7 @@ bool MachineBasicBlock::CorrectExtraCFGEdges(MachineBasicBlock *DestA,
 }
 
 /// findDebugLoc - find the next valid DebugLoc starting at MBBI, skipping
-/// any DEBUG_VALUE instructions.  Return UnknownLoc if there is none.
+/// any DBG_VALUE instructions.  Return UnknownLoc if there is none.
 DebugLoc
 MachineBasicBlock::findDebugLoc(MachineBasicBlock::iterator &MBBI) {
   DebugLoc DL;
@@ -548,8 +548,7 @@ MachineBasicBlock::findDebugLoc(MachineBasicBlock::iterator &MBBI) {
   if (MBBI != E) {
     // Skip debug declarations, we don't want a DebugLoc from them.
     MachineBasicBlock::iterator MBBI2 = MBBI;
-    while (MBBI2 != E &&
-           MBBI2->getOpcode()==TargetInstrInfo::DEBUG_VALUE)
+    while (MBBI2 != E && MBBI2->isDebugValue())
       MBBI2++;
     if (MBBI2 != E)
       DL = MBBI2->getDebugLoc();
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineFunction.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineFunction.cpp
index 511f4ae..f141c56 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineFunction.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineFunction.cpp
@@ -187,7 +187,7 @@ MachineFunction::CreateMachineInstr(const TargetInstrDesc &TID,
 }
 
 /// CloneMachineInstr - Create a new MachineInstr which is a copy of the
-/// 'Orig' instruction, identical in all ways except the the instruction
+/// 'Orig' instruction, identical in all ways except the instruction
 /// has no parent, prev, or next.
 ///
 MachineInstr *
@@ -453,8 +453,7 @@ MCSymbol *MachineFunction::getJTISymbol(unsigned JTI, MCContext &Ctx,
                                         bool isLinkerPrivate) const {
   assert(JumpTableInfo && "No jump tables");
   
-  const std::vector<MachineJumpTableEntry> &JTs =JumpTableInfo->getJumpTables();
-  assert(JTI < JTs.size() && "Invalid JTI!");
+  assert(JTI < JumpTableInfo->getJumpTables().size() && "Invalid JTI!");
   const MCAsmInfo &MAI = *getTarget().getMCAsmInfo();
   
   const char *Prefix = isLinkerPrivate ? MAI.getLinkerPrivateGlobalPrefix() :
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineInstr.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineInstr.cpp
index ef2fcee..df61c74 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineInstr.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineInstr.cpp
@@ -127,7 +127,8 @@ void MachineOperand::ChangeToImmediate(int64_t ImmVal) {
 /// the specified value.  If an operand is known to be a register already,
 /// the setReg method should be used.
 void MachineOperand::ChangeToRegister(unsigned Reg, bool isDef, bool isImp,
-                                      bool isKill, bool isDead, bool isUndef) {
+                                      bool isKill, bool isDead, bool isUndef,
+                                      bool isDebug) {
   // If this operand is already a register operand, use setReg to update the 
   // register's use/def lists.
   if (isReg()) {
@@ -152,6 +153,7 @@ void MachineOperand::ChangeToRegister(unsigned Reg, bool isDef, bool isImp,
   IsDead = isDead;
   IsUndef = isUndef;
   IsEarlyClobber = false;
+  IsDebug = isDebug;
   SubReg = 0;
 }
 
@@ -740,20 +742,6 @@ unsigned MachineInstr::getNumExplicitOperands() const {
 }
 
 
-/// isLabel - Returns true if the MachineInstr represents a label.
-///
-bool MachineInstr::isLabel() const {
-  return getOpcode() == TargetInstrInfo::DBG_LABEL ||
-         getOpcode() == TargetInstrInfo::EH_LABEL ||
-         getOpcode() == TargetInstrInfo::GC_LABEL;
-}
-
-/// isDebugLabel - Returns true if the MachineInstr represents a debug label.
-///
-bool MachineInstr::isDebugLabel() const {
-  return getOpcode() == TargetInstrInfo::DBG_LABEL;
-}
-
 /// findRegisterUseOperandIdx() - Returns the MachineOperand that is a use of
 /// the specific register or -1 if it is not found. It further tightens
 /// the search criteria to a use that kills the register if isKill is true.
@@ -819,7 +807,7 @@ int MachineInstr::findFirstPredOperandIdx() const {
 /// first tied use operand index by reference is UseOpIdx is not null.
 bool MachineInstr::
 isRegTiedToUseOperand(unsigned DefOpIdx, unsigned *UseOpIdx) const {
-  if (getOpcode() == TargetInstrInfo::INLINEASM) {
+  if (isInlineAsm()) {
     assert(DefOpIdx >= 2);
     const MachineOperand &MO = getOperand(DefOpIdx);
     if (!MO.isReg() || !MO.isDef() || MO.getReg() == 0)
@@ -878,7 +866,7 @@ isRegTiedToUseOperand(unsigned DefOpIdx, unsigned *UseOpIdx) const {
 /// operand index by reference.
 bool MachineInstr::
 isRegTiedToDefOperand(unsigned UseOpIdx, unsigned *DefOpIdx) const {
-  if (getOpcode() == TargetInstrInfo::INLINEASM) {
+  if (isInlineAsm()) {
     const MachineOperand &MO = getOperand(UseOpIdx);
     if (!MO.isReg() || !MO.isUse() || MO.getReg() == 0)
       return false;
@@ -1046,7 +1034,7 @@ bool MachineInstr::hasVolatileMemoryRef() const {
 
 /// isInvariantLoad - Return true if this instruction is loading from a
 /// location whose value is invariant across the function.  For example,
-/// loading a value from the constant pool or from from the argument area
+/// loading a value from the constant pool or from the argument area
 /// of a function if it does not change.  This should only return true if
 /// *all* loads the instruction does are invariant (if it does multiple loads).
 bool MachineInstr::isInvariantLoad(AliasAnalysis *AA) const {
@@ -1088,7 +1076,7 @@ bool MachineInstr::isInvariantLoad(AliasAnalysis *AA) const {
 /// merges together the same virtual register, return the register, otherwise
 /// return 0.
 unsigned MachineInstr::isConstantValuePHI() const {
-  if (getOpcode() != TargetInstrInfo::PHI)
+  if (!isPHI())
     return 0;
   assert(getNumOperands() >= 3 &&
          "It's illegal to have a PHI without source operands");
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineLICM.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineLICM.cpp
index ffcc8ab..92c84f3 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineLICM.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineLICM.cpp
@@ -336,7 +336,7 @@ static bool HasPHIUses(unsigned Reg, MachineRegisterInfo *RegInfo) {
   for (MachineRegisterInfo::use_iterator UI = RegInfo->use_begin(Reg),
          UE = RegInfo->use_end(); UI != UE; ++UI) {
     MachineInstr *UseMI = &*UI;
-    if (UseMI->getOpcode() == TargetInstrInfo::PHI)
+    if (UseMI->isPHI())
       return true;
   }
   return false;
@@ -363,7 +363,7 @@ bool MachineLICM::isLoadFromConstantMemory(MachineInstr *MI) {
 /// IsProfitableToHoist - Return true if it is potentially profitable to hoist
 /// the given loop invariant.
 bool MachineLICM::IsProfitableToHoist(MachineInstr &MI) {
-  if (MI.getOpcode() == TargetInstrInfo::IMPLICIT_DEF)
+  if (MI.isImplicitDef())
     return false;
 
   // FIXME: For now, only hoist re-materilizable instructions. LICM will
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineModuleInfo.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineModuleInfo.cpp
index ed5bb5e..5052af7 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineModuleInfo.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineModuleInfo.cpp
@@ -40,6 +40,7 @@ MachineModuleInfoImpl::~MachineModuleInfoImpl() {}
 MachineModuleInfo::MachineModuleInfo()
 : ImmutablePass(&ID)
 , ObjFileMMI(0)
+, CurCallSite(0)
 , CallsEHReturn(0)
 , CallsUnwindInit(0)
 , DbgInfoAvailable(false) {
@@ -71,6 +72,7 @@ void MachineModuleInfo::EndFunction() {
 
   // Clean up exception info.
   LandingPads.clear();
+  CallSiteMap.clear();
   TypeInfos.clear();
   FilterIds.clear();
   FilterEnds.clear();
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineModuleInfoImpls.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineModuleInfoImpls.cpp
index 7a62929..8378906 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineModuleInfoImpls.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineModuleInfoImpls.cpp
@@ -26,17 +26,17 @@ void MachineModuleInfoMachO::Anchor() {}
 
 static int SortSymbolPair(const void *LHS, const void *RHS) {
   const MCSymbol *LHSS =
-    ((const std::pair<const MCSymbol*, const MCSymbol*>*)LHS)->first;
+    ((const std::pair<MCSymbol*, MCSymbol*>*)LHS)->first;
   const MCSymbol *RHSS =
-    ((const std::pair<const MCSymbol*, const MCSymbol*>*)RHS)->first;
+    ((const std::pair<MCSymbol*, MCSymbol*>*)RHS)->first;
   return LHSS->getName().compare(RHSS->getName());
 }
 
 /// GetSortedStubs - Return the entries from a DenseMap in a deterministic
 /// sorted order.
 MachineModuleInfoMachO::SymbolListTy
-MachineModuleInfoMachO::GetSortedStubs(const DenseMap<const MCSymbol*, 
-                                                      const MCSymbol*> &Map) {
+MachineModuleInfoMachO::GetSortedStubs(const DenseMap<MCSymbol*, 
+                                                      MCSymbol*> &Map) {
   MachineModuleInfoMachO::SymbolListTy List(Map.begin(), Map.end());
   if (!List.empty())
     qsort(&List[0], List.size(), sizeof(List[0]), SortSymbolPair);
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineSSAUpdater.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineSSAUpdater.cpp
index 467ea5d..2255dc3 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineSSAUpdater.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineSSAUpdater.cpp
@@ -20,6 +20,7 @@
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/Target/TargetRegisterInfo.h"
 #include "llvm/ADT/DenseMap.h"
+#include "llvm/ADT/SmallVector.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
@@ -92,13 +93,13 @@ unsigned LookForIdenticalPHI(MachineBasicBlock *BB,
     return 0;
 
   MachineBasicBlock::iterator I = BB->front();
-  if (I->getOpcode() != TargetInstrInfo::PHI)
+  if (!I->isPHI())
     return 0;
 
   AvailableValsTy AVals;
   for (unsigned i = 0, e = PredValues.size(); i != e; ++i)
     AVals[PredValues[i].first] = PredValues[i].second;
-  while (I != BB->end() && I->getOpcode() == TargetInstrInfo::PHI) {
+  while (I != BB->end() && I->isPHI()) {
     bool Same = true;
     for (unsigned i = 1, e = I->getNumOperands(); i != e; i += 2) {
       unsigned SrcReg = I->getOperand(i).getReg();
@@ -155,7 +156,7 @@ unsigned MachineSSAUpdater::GetValueInMiddleOfBlock(MachineBasicBlock *BB) {
   // If there are no predecessors, just return undef.
   if (BB->pred_empty()) {
     // Insert an implicit_def to represent an undef value.
-    MachineInstr *NewDef = InsertNewDef(TargetInstrInfo::IMPLICIT_DEF,
+    MachineInstr *NewDef = InsertNewDef(TargetOpcode::IMPLICIT_DEF,
                                         BB, BB->getFirstTerminator(),
                                         VRC, MRI, TII);
     return NewDef->getOperand(0).getReg();
@@ -192,7 +193,7 @@ unsigned MachineSSAUpdater::GetValueInMiddleOfBlock(MachineBasicBlock *BB) {
 
   // Otherwise, we do need a PHI: insert one now.
   MachineBasicBlock::iterator Loc = BB->empty() ? BB->end() : BB->front();
-  MachineInstr *InsertedPHI = InsertNewDef(TargetInstrInfo::PHI, BB,
+  MachineInstr *InsertedPHI = InsertNewDef(TargetOpcode::PHI, BB,
                                            Loc, VRC, MRI, TII);
 
   // Fill in all the predecessors of the PHI.
@@ -231,7 +232,7 @@ MachineBasicBlock *findCorrespondingPred(const MachineInstr *MI,
 void MachineSSAUpdater::RewriteUse(MachineOperand &U) {
   MachineInstr *UseMI = U.getParent();
   unsigned NewVR = 0;
-  if (UseMI->getOpcode() == TargetInstrInfo::PHI) {
+  if (UseMI->isPHI()) {
     MachineBasicBlock *SourceBB = findCorrespondingPred(UseMI, &U);
     NewVR = GetValueAtEndOfBlockInternal(SourceBB);
   } else {
@@ -277,7 +278,7 @@ unsigned MachineSSAUpdater::GetValueAtEndOfBlockInternal(MachineBasicBlock *BB){
     // it.  When we get back to the first instance of the recursion we will fill
     // in the PHI node.
     MachineBasicBlock::iterator Loc = BB->empty() ? BB->end() : BB->front();
-    MachineInstr *NewPHI = InsertNewDef(TargetInstrInfo::PHI, BB, Loc,
+    MachineInstr *NewPHI = InsertNewDef(TargetOpcode::PHI, BB, Loc,
                                         VRC, MRI,TII);
     unsigned NewVR = NewPHI->getOperand(0).getReg();
     InsertRes.first->second = NewVR;
@@ -289,7 +290,7 @@ unsigned MachineSSAUpdater::GetValueAtEndOfBlockInternal(MachineBasicBlock *BB){
   // be invalidated.
   if (BB->pred_empty()) {
     // Insert an implicit_def to represent an undef value.
-    MachineInstr *NewDef = InsertNewDef(TargetInstrInfo::IMPLICIT_DEF,
+    MachineInstr *NewDef = InsertNewDef(TargetOpcode::IMPLICIT_DEF,
                                         BB, BB->getFirstTerminator(),
                                         VRC, MRI, TII);
     return InsertRes.first->second = NewDef->getOperand(0).getReg();
@@ -358,7 +359,7 @@ unsigned MachineSSAUpdater::GetValueAtEndOfBlockInternal(MachineBasicBlock *BB){
   MachineInstr *InsertedPHI;
   if (InsertedVal == 0) {
     MachineBasicBlock::iterator Loc = BB->empty() ? BB->end() : BB->front();
-    InsertedPHI = InsertNewDef(TargetInstrInfo::PHI, BB, Loc,
+    InsertedPHI = InsertNewDef(TargetOpcode::PHI, BB, Loc,
                                VRC, MRI, TII);
     InsertedVal = InsertedPHI->getOperand(0).getReg();
   } else {
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineSink.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineSink.cpp
index c177e3c..c391576 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineSink.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineSink.cpp
@@ -77,7 +77,7 @@ bool MachineSinking::AllUsesDominatedByBlock(unsigned Reg,
     // Determine the block of the use.
     MachineInstr *UseInst = &*I;
     MachineBasicBlock *UseBlock = UseInst->getParent();
-    if (UseInst->getOpcode() == TargetInstrInfo::PHI) {
+    if (UseInst->isPHI()) {
       // PHI nodes use the operand in the predecessor block, not the block with
       // the PHI.
       UseBlock = UseInst->getOperand(I.getOperandNo()+1).getMBB();
@@ -269,8 +269,7 @@ bool MachineSinking::SinkInstruction(MachineInstr *MI, bool &SawStore) {
   
   // Determine where to insert into.  Skip phi nodes.
   MachineBasicBlock::iterator InsertPos = SuccToSinkTo->begin();
-  while (InsertPos != SuccToSinkTo->end() && 
-         InsertPos->getOpcode() == TargetInstrInfo::PHI)
+  while (InsertPos != SuccToSinkTo->end() && InsertPos->isPHI())
     ++InsertPos;
   
   // Move the instruction.
diff --git a/libclamav/c++/llvm/lib/CodeGen/MachineVerifier.cpp b/libclamav/c++/llvm/lib/CodeGen/MachineVerifier.cpp
index 584c21b..434a1e8 100644
--- a/libclamav/c++/llvm/lib/CodeGen/MachineVerifier.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/MachineVerifier.cpp
@@ -590,7 +590,7 @@ MachineVerifier::visitMachineOperand(const MachineOperand *MO, unsigned MONum) {
           // must be live in. PHI instructions are handled separately.
           if (MInfo.regsKilled.count(Reg))
             report("Using a killed virtual register", MO, MONum);
-          else if (MI->getOpcode() != TargetInstrInfo::PHI)
+          else if (!MI->isPHI())
             MInfo.vregsLiveIn.insert(std::make_pair(Reg, MI));
         }
       }
@@ -650,10 +650,8 @@ MachineVerifier::visitMachineOperand(const MachineOperand *MO, unsigned MONum) {
   }
 
   case MachineOperand::MO_MachineBasicBlock:
-    if (MI->getOpcode() == TargetInstrInfo::PHI) {
-      if (!MO->getMBB()->isSuccessor(MI->getParent()))
-        report("PHI operand is not in the CFG", MO, MONum);
-    }
+    if (MI->isPHI() && !MO->getMBB()->isSuccessor(MI->getParent()))
+      report("PHI operand is not in the CFG", MO, MONum);
     break;
 
   default:
@@ -783,7 +781,7 @@ void MachineVerifier::calcRegsRequired() {
 // calcRegsPassed has been run so BBInfo::isLiveOut is valid.
 void MachineVerifier::checkPHIOps(const MachineBasicBlock *MBB) {
   for (MachineBasicBlock::const_iterator BBI = MBB->begin(), BBE = MBB->end();
-       BBI != BBE && BBI->getOpcode() == TargetInstrInfo::PHI; ++BBI) {
+       BBI != BBE && BBI->isPHI(); ++BBI) {
     DenseSet<const MachineBasicBlock*> seen;
 
     for (unsigned i = 1, e = BBI->getNumOperands(); i != e; i += 2) {
diff --git a/libclamav/c++/llvm/lib/CodeGen/OptimizeExts.cpp b/libclamav/c++/llvm/lib/CodeGen/OptimizeExts.cpp
index 096f9d4..acb6869 100644
--- a/libclamav/c++/llvm/lib/CodeGen/OptimizeExts.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/OptimizeExts.cpp
@@ -110,7 +110,7 @@ bool OptimizeExts::OptimizeInstr(MachineInstr *MI, MachineBasicBlock *MBB,
       MachineInstr *UseMI = &*UI;
       if (UseMI == MI)
         continue;
-      if (UseMI->getOpcode() == TargetInstrInfo::PHI) {
+      if (UseMI->isPHI()) {
         ExtendLife = false;
         continue;
       }
@@ -150,7 +150,7 @@ bool OptimizeExts::OptimizeInstr(MachineInstr *MI, MachineBasicBlock *MBB,
       UI = MRI->use_begin(DstReg);
       for (MachineRegisterInfo::use_iterator UE = MRI->use_end(); UI != UE;
            ++UI)
-        if (UI->getOpcode() == TargetInstrInfo::PHI)
+        if (UI->isPHI())
           PHIBBs.insert(UI->getParent());
 
       const TargetRegisterClass *RC = MRI->getRegClass(SrcReg);
@@ -162,7 +162,7 @@ bool OptimizeExts::OptimizeInstr(MachineInstr *MI, MachineBasicBlock *MBB,
           continue;
         unsigned NewVR = MRI->createVirtualRegister(RC);
         BuildMI(*UseMBB, UseMI, UseMI->getDebugLoc(),
-                TII->get(TargetInstrInfo::EXTRACT_SUBREG), NewVR)
+                TII->get(TargetOpcode::EXTRACT_SUBREG), NewVR)
           .addReg(DstReg).addImm(SubIdx);
         UseMO->setReg(NewVR);
         ++NumReuse;
diff --git a/libclamav/c++/llvm/lib/CodeGen/OptimizePHIs.cpp b/libclamav/c++/llvm/lib/CodeGen/OptimizePHIs.cpp
new file mode 100644
index 0000000..2717d4d
--- /dev/null
+++ b/libclamav/c++/llvm/lib/CodeGen/OptimizePHIs.cpp
@@ -0,0 +1,189 @@
+//===-- OptimizePHIs.cpp - Optimize machine instruction PHIs --------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This pass optimizes machine instruction PHIs to take advantage of
+// opportunities created during DAG legalization.
+//
+//===----------------------------------------------------------------------===//
+
+#define DEBUG_TYPE "phi-opt"
+#include "llvm/CodeGen/Passes.h"
+#include "llvm/CodeGen/MachineFunctionPass.h"
+#include "llvm/CodeGen/MachineInstr.h"
+#include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/Target/TargetInstrInfo.h"
+#include "llvm/Function.h"
+#include "llvm/ADT/SmallPtrSet.h"
+#include "llvm/ADT/Statistic.h"
+using namespace llvm;
+
+STATISTIC(NumPHICycles, "Number of PHI cycles replaced");
+STATISTIC(NumDeadPHICycles, "Number of dead PHI cycles");
+
+namespace {
+  class OptimizePHIs : public MachineFunctionPass {
+    MachineRegisterInfo *MRI;
+    const TargetInstrInfo *TII;
+
+  public:
+    static char ID; // Pass identification
+    OptimizePHIs() : MachineFunctionPass(&ID) {}
+
+    virtual bool runOnMachineFunction(MachineFunction &MF);
+
+    virtual void getAnalysisUsage(AnalysisUsage &AU) const {
+      AU.setPreservesCFG();
+      MachineFunctionPass::getAnalysisUsage(AU);
+    }
+
+  private:
+    typedef SmallPtrSet<MachineInstr*, 16> InstrSet;
+    typedef SmallPtrSetIterator<MachineInstr*> InstrSetIterator;
+
+    bool IsSingleValuePHICycle(MachineInstr *MI, unsigned &SingleValReg,
+                               InstrSet &PHIsInCycle);
+    bool IsDeadPHICycle(MachineInstr *MI, InstrSet &PHIsInCycle);
+    bool OptimizeBB(MachineBasicBlock &MBB);
+  };
+}
+
+char OptimizePHIs::ID = 0;
+static RegisterPass<OptimizePHIs>
+X("opt-phis", "Optimize machine instruction PHIs");
+
+FunctionPass *llvm::createOptimizePHIsPass() { return new OptimizePHIs(); }
+
+bool OptimizePHIs::runOnMachineFunction(MachineFunction &Fn) {
+  MRI = &Fn.getRegInfo();
+  TII = Fn.getTarget().getInstrInfo();
+
+  // Find dead PHI cycles and PHI cycles that can be replaced by a single
+  // value.  InstCombine does these optimizations, but DAG legalization may
+  // introduce new opportunities, e.g., when i64 values are split up for
+  // 32-bit targets.
+  bool Changed = false;
+  for (MachineFunction::iterator I = Fn.begin(), E = Fn.end(); I != E; ++I)
+    Changed |= OptimizeBB(*I);
+
+  return Changed;
+}
+
+/// IsSingleValuePHICycle - Check if MI is a PHI where all the source operands
+/// are copies of SingleValReg, possibly via copies through other PHIs.  If
+/// SingleValReg is zero on entry, it is set to the register with the single
+/// non-copy value.  PHIsInCycle is a set used to keep track of the PHIs that
+/// have been scanned.
+bool OptimizePHIs::IsSingleValuePHICycle(MachineInstr *MI,
+                                         unsigned &SingleValReg,
+                                         InstrSet &PHIsInCycle) {
+  assert(MI->isPHI() && "IsSingleValuePHICycle expects a PHI instruction");
+  unsigned DstReg = MI->getOperand(0).getReg();
+
+  // See if we already saw this register.
+  if (!PHIsInCycle.insert(MI))
+    return true;
+
+  // Don't scan crazily complex things.
+  if (PHIsInCycle.size() == 16)
+    return false;
+
+  // Scan the PHI operands.
+  for (unsigned i = 1; i != MI->getNumOperands(); i += 2) {
+    unsigned SrcReg = MI->getOperand(i).getReg();
+    if (SrcReg == DstReg)
+      continue;
+    MachineInstr *SrcMI = MRI->getVRegDef(SrcReg);
+
+    // Skip over register-to-register moves.
+    unsigned MvSrcReg, MvDstReg, SrcSubIdx, DstSubIdx;
+    if (SrcMI &&
+        TII->isMoveInstr(*SrcMI, MvSrcReg, MvDstReg, SrcSubIdx, DstSubIdx) &&
+        SrcSubIdx == 0 && DstSubIdx == 0 &&
+        TargetRegisterInfo::isVirtualRegister(MvSrcReg))
+      SrcMI = MRI->getVRegDef(MvSrcReg);
+    if (!SrcMI)
+      return false;
+
+    if (SrcMI->isPHI()) {
+      if (!IsSingleValuePHICycle(SrcMI, SingleValReg, PHIsInCycle))
+        return false;
+    } else {
+      // Fail if there is more than one non-phi/non-move register.
+      if (SingleValReg != 0)
+        return false;
+      SingleValReg = SrcReg;
+    }
+  }
+  return true;
+}
+
+/// IsDeadPHICycle - Check if the register defined by a PHI is only used by
+/// other PHIs in a cycle.
+bool OptimizePHIs::IsDeadPHICycle(MachineInstr *MI, InstrSet &PHIsInCycle) {
+  assert(MI->isPHI() && "IsDeadPHICycle expects a PHI instruction");
+  unsigned DstReg = MI->getOperand(0).getReg();
+  assert(TargetRegisterInfo::isVirtualRegister(DstReg) &&
+         "PHI destination is not a virtual register");
+
+  // See if we already saw this register.
+  if (!PHIsInCycle.insert(MI))
+    return true;
+
+  // Don't scan crazily complex things.
+  if (PHIsInCycle.size() == 16)
+    return false;
+
+  for (MachineRegisterInfo::use_iterator I = MRI->use_begin(DstReg),
+         E = MRI->use_end(); I != E; ++I) {
+    MachineInstr *UseMI = &*I;
+    if (!UseMI->isPHI() || !IsDeadPHICycle(UseMI, PHIsInCycle))
+      return false;
+  }
+
+  return true;
+}
+
+/// OptimizeBB - Remove dead PHI cycles and PHI cycles that can be replaced by
+/// a single value.
+bool OptimizePHIs::OptimizeBB(MachineBasicBlock &MBB) {
+  bool Changed = false;
+  for (MachineBasicBlock::iterator
+         MII = MBB.begin(), E = MBB.end(); MII != E; ) {
+    MachineInstr *MI = &*MII++;
+    if (!MI->isPHI())
+      break;
+
+    // Check for single-value PHI cycles.
+    unsigned SingleValReg = 0;
+    InstrSet PHIsInCycle;
+    if (IsSingleValuePHICycle(MI, SingleValReg, PHIsInCycle) &&
+        SingleValReg != 0) {
+      MRI->replaceRegWith(MI->getOperand(0).getReg(), SingleValReg);
+      MI->eraseFromParent();
+      ++NumPHICycles;
+      Changed = true;
+      continue;
+    }
+
+    // Check for dead PHI cycles.
+    PHIsInCycle.clear();
+    if (IsDeadPHICycle(MI, PHIsInCycle)) {
+      for (InstrSetIterator PI = PHIsInCycle.begin(), PE = PHIsInCycle.end();
+           PI != PE; ++PI) {
+        MachineInstr *PhiMI = *PI;
+        if (&*MII == PhiMI)
+          ++MII;
+        PhiMI->eraseFromParent();
+      }
+      ++NumDeadPHICycles;
+      Changed = true;
+    }
+  }
+  return Changed;
+}
diff --git a/libclamav/c++/llvm/lib/CodeGen/PBQP/Graph.h b/libclamav/c++/llvm/lib/CodeGen/PBQP/Graph.h
index 40fc919..b2224cb 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PBQP/Graph.h
+++ b/libclamav/c++/llvm/lib/CodeGen/PBQP/Graph.h
@@ -19,6 +19,7 @@
 
 #include <list>
 #include <vector>
+#include <map>
 
 namespace PBQP {
 
@@ -37,7 +38,10 @@ namespace PBQP {
   public:
 
     typedef NodeList::iterator NodeItr;
+    typedef NodeList::const_iterator ConstNodeItr;
+
     typedef EdgeList::iterator EdgeItr;
+    typedef EdgeList::const_iterator ConstEdgeItr;
 
   private:
 
@@ -58,6 +62,7 @@ namespace PBQP {
     public:
       NodeEntry(const Vector &costs) : costs(costs), degree(0) {}
       Vector& getCosts() { return costs; }
+      const Vector& getCosts() const { return costs; }
       unsigned getDegree() const { return degree; }
       AdjEdgeItr edgesBegin() { return adjEdges.begin(); }
       AdjEdgeItr edgesEnd() { return adjEdges.end(); }
@@ -85,6 +90,7 @@ namespace PBQP {
       NodeItr getNode1() const { return node1; }
       NodeItr getNode2() const { return node2; }
       Matrix& getCosts() { return costs; }
+      const Matrix& getCosts() const { return costs; }
       void setNode1AEItr(AdjEdgeItr ae) { node1AEItr = ae; }
       AdjEdgeItr getNode1AEItr() { return node1AEItr; }
       void setNode2AEItr(AdjEdgeItr ae) { node2AEItr = ae; }
@@ -104,9 +110,10 @@ namespace PBQP {
     // ----- INTERNAL METHODS -----
 
     NodeEntry& getNode(NodeItr nItr) { return *nItr; }
-    const NodeEntry& getNode(NodeItr nItr) const { return *nItr; }
+    const NodeEntry& getNode(ConstNodeItr nItr) const { return *nItr; }
+
     EdgeEntry& getEdge(EdgeItr eItr) { return *eItr; }
-    const EdgeEntry& getEdge(EdgeItr eItr) const { return *eItr; }
+    const EdgeEntry& getEdge(ConstEdgeItr eItr) const { return *eItr; }
 
     NodeItr addConstructedNode(const NodeEntry &n) {
       ++numNodes;
@@ -130,10 +137,32 @@ namespace PBQP {
       return edgeItr;
     }
 
+    inline void copyFrom(const Graph &other);
   public:
 
+    /// \brief Construct an empty PBQP graph.
     Graph() : numNodes(0), numEdges(0) {}
 
+    /// \brief Copy construct this graph from "other". Note: Does not copy node
+    ///        and edge data, only graph structure and costs.
+    /// @param other Source graph to copy from.
+    Graph(const Graph &other) : numNodes(0), numEdges(0) {
+      copyFrom(other);
+    }
+
+    /// \brief Make this graph a copy of "other". Note: Does not copy node and
+    ///        edge data, only graph structure and costs.
+    /// @param other The graph to copy from.
+    /// @return A reference to this graph.
+    ///
+    /// This will clear the current graph, erasing any nodes and edges added,
+    /// before copying from other.
+    Graph& operator=(const Graph &other) {
+      clear();      
+      copyFrom(other);
+      return *this;
+    }
+
     /// \brief Add a node with the given costs.
     /// @param costs Cost vector for the new node.
     /// @return Node iterator for the added node.
@@ -166,6 +195,13 @@ namespace PBQP {
     /// @return Node cost vector.
     Vector& getNodeCosts(NodeItr nItr) { return getNode(nItr).getCosts(); }
 
+    /// \brief Get a node's cost vector (const version).
+    /// @param nItr Node iterator.
+    /// @return Node cost vector.
+    const Vector& getNodeCosts(ConstNodeItr nItr) const {
+      return getNode(nItr).getCosts();
+    }
+
     /// \brief Set a node's data pointer.
     /// @param nItr Node iterator.
     /// @param data Pointer to node data.
@@ -183,6 +219,13 @@ namespace PBQP {
     /// @return Edge cost matrix.
     Matrix& getEdgeCosts(EdgeItr eItr) { return getEdge(eItr).getCosts(); }
 
+    /// \brief Get an edge's cost matrix (const version).
+    /// @param eItr Edge iterator.
+    /// @return Edge cost matrix.
+    const Matrix& getEdgeCosts(ConstEdgeItr eItr) const {
+      return getEdge(eItr).getCosts();
+    }
+
     /// \brief Set an edge's data pointer.
     /// @param eItr Edge iterator.
     /// @param data Pointer to edge data.
@@ -205,9 +248,15 @@ namespace PBQP {
     /// \brief Begin iterator for node set.
     NodeItr nodesBegin() { return nodes.begin(); }
 
+    /// \brief Begin const iterator for node set.
+    ConstNodeItr nodesBegin() const { return nodes.begin(); }
+
     /// \brief End iterator for node set.
     NodeItr nodesEnd() { return nodes.end(); }
 
+    /// \brief End const iterator for node set.
+    ConstNodeItr nodesEnd() const { return nodes.end(); }
+
     /// \brief Begin iterator for edge set.
     EdgeItr edgesBegin() { return edges.begin(); }
 
@@ -342,6 +391,10 @@ namespace PBQP {
     bool operator()(Graph::NodeItr n1, Graph::NodeItr n2) const {
       return &*n1 < &*n2;
     }
+
+    bool operator()(Graph::ConstNodeItr n1, Graph::ConstNodeItr n2) const {
+      return &*n1 < &*n2;
+    }
   };
 
   class EdgeItrCompartor {
@@ -349,8 +402,23 @@ namespace PBQP {
     bool operator()(Graph::EdgeItr e1, Graph::EdgeItr e2) const {
       return &*e1 < &*e2;
     }
+
+    bool operator()(Graph::ConstEdgeItr e1, Graph::ConstEdgeItr e2) const {
+      return &*e1 < &*e2;
+    }
   };
 
+  void Graph::copyFrom(const Graph &other) {
+    std::map<Graph::ConstNodeItr, Graph::NodeItr,
+             NodeItrComparator> nodeMap;
+
+     for (Graph::ConstNodeItr nItr = other.nodesBegin(),
+                             nEnd = other.nodesEnd();
+         nItr != nEnd; ++nItr) {
+      nodeMap[nItr] = addNode(other.getNodeCosts(nItr));
+    }
+      
+  }
 
 }
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/PBQP/HeuristicSolver.h b/libclamav/c++/llvm/lib/CodeGen/PBQP/HeuristicSolver.h
index 5066685..c156264 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PBQP/HeuristicSolver.h
+++ b/libclamav/c++/llvm/lib/CodeGen/PBQP/HeuristicSolver.h
@@ -9,7 +9,7 @@
 //
 // Heuristic PBQP solver. This solver is able to perform optimal reductions for
 // nodes of degree 0, 1 or 2. For nodes of degree >2 a plugable heuristic is
-// used to to select a node for reduction. 
+// used to select a node for reduction. 
 //
 //===----------------------------------------------------------------------===//
 
@@ -18,7 +18,6 @@
 
 #include "Graph.h"
 #include "Solution.h"
-#include "llvm/Support/raw_ostream.h"
 #include <vector>
 #include <limits>
 
@@ -107,8 +106,11 @@ namespace PBQP {
     Solution s;
     std::vector<Graph::NodeItr> stack;
 
-    std::vector<NodeData> nodeData;
-    std::vector<EdgeData> edgeData;
+    typedef std::list<NodeData> NodeDataList;
+    NodeDataList nodeDataList;
+
+    typedef std::list<EdgeData> EdgeDataList;
+    EdgeDataList edgeDataList;
 
   public:
 
@@ -364,8 +366,8 @@ namespace PBQP {
       } else if (addedEdge) {
         // If the edge was added, and non-null, finish setting it up, add it to
         // the solver & notify heuristic.
-        edgeData.push_back(EdgeData());
-        g.setEdgeData(yzeItr, &edgeData.back());
+        edgeDataList.push_back(EdgeData());
+        g.setEdgeData(yzeItr, &edgeDataList.back());
         addSolverEdge(yzeItr);
         h.handleAddEdge(yzeItr);
       }
@@ -402,22 +404,18 @@ namespace PBQP {
         simplify();
       }
 
-      // Reserve space for the node and edge data.
-      nodeData.resize(g.getNumNodes());
-      edgeData.resize(g.getNumEdges());
-
       // Create node data objects.
-      unsigned ndIndex = 0;     
       for (Graph::NodeItr nItr = g.nodesBegin(), nEnd = g.nodesEnd();
-	       nItr != nEnd; ++nItr, ++ndIndex) {
-        g.setNodeData(nItr, &nodeData[ndIndex]);
+	       nItr != nEnd; ++nItr) {
+        nodeDataList.push_back(NodeData());
+        g.setNodeData(nItr, &nodeDataList.back());
       }
 
       // Create edge data objects.
-      unsigned edIndex = 0;
       for (Graph::EdgeItr eItr = g.edgesBegin(), eEnd = g.edgesEnd();
-           eItr != eEnd; ++eItr, ++edIndex) {
-        g.setEdgeData(eItr, &edgeData[edIndex]);
+           eItr != eEnd; ++eItr) {
+        edgeDataList.push_back(EdgeData());
+        g.setEdgeData(eItr, &edgeDataList.back());
         addSolverEdge(eItr);
       }
     }
@@ -495,14 +493,23 @@ namespace PBQP {
 
     bool tryNormaliseEdgeMatrix(Graph::EdgeItr &eItr) {
 
+      const PBQPNum infinity = std::numeric_limits<PBQPNum>::infinity();
+
       Matrix &edgeCosts = g.getEdgeCosts(eItr);
       Vector &uCosts = g.getNodeCosts(g.getEdgeNode1(eItr)),
              &vCosts = g.getNodeCosts(g.getEdgeNode2(eItr));
 
       for (unsigned r = 0; r < edgeCosts.getRows(); ++r) {
-        PBQPNum rowMin = edgeCosts.getRowMin(r);
+        PBQPNum rowMin = infinity;
+
+        for (unsigned c = 0; c < edgeCosts.getCols(); ++c) {
+          if (vCosts[c] != infinity && edgeCosts[r][c] < rowMin)
+            rowMin = edgeCosts[r][c];
+        }
+
         uCosts[r] += rowMin;
-        if (rowMin != std::numeric_limits<PBQPNum>::infinity()) {
+
+        if (rowMin != infinity) {
           edgeCosts.subFromRow(r, rowMin);
         }
         else {
@@ -511,9 +518,16 @@ namespace PBQP {
       }
 
       for (unsigned c = 0; c < edgeCosts.getCols(); ++c) {
-        PBQPNum colMin = edgeCosts.getColMin(c);
+        PBQPNum colMin = infinity;
+
+        for (unsigned r = 0; r < edgeCosts.getRows(); ++r) {
+          if (uCosts[r] != infinity && edgeCosts[r][c] < colMin)
+            colMin = edgeCosts[r][c];
+        }
+
         vCosts[c] += colMin;
-        if (colMin != std::numeric_limits<PBQPNum>::infinity()) {
+
+        if (colMin != infinity) {
           edgeCosts.subFromCol(c, colMin);
         }
         else {
@@ -563,8 +577,8 @@ namespace PBQP {
 
     void cleanup() {
       h.cleanup();
-      nodeData.clear();
-      edgeData.clear();
+      nodeDataList.clear();
+      edgeDataList.clear();
     }
   };
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/PBQP/Heuristics/Briggs.h b/libclamav/c++/llvm/lib/CodeGen/PBQP/Heuristics/Briggs.h
index 65f22cb..30d34d9 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PBQP/Heuristics/Briggs.h
+++ b/libclamav/c++/llvm/lib/CodeGen/PBQP/Heuristics/Briggs.h
@@ -18,6 +18,7 @@
 #ifndef LLVM_CODEGEN_PBQP_HEURISTICS_BRIGGS_H
 #define LLVM_CODEGEN_PBQP_HEURISTICS_BRIGGS_H
 
+#include "llvm/Support/Compiler.h"
 #include "../HeuristicSolver.h"
 #include "../HeuristicBase.h"
 
@@ -127,14 +128,7 @@ namespace PBQP {
       /// selected for heuristic reduction instead.
       bool shouldOptimallyReduce(Graph::NodeItr nItr) {
         if (getSolver().getSolverDegree(nItr) < 3) {
-          if (getGraph().getNodeCosts(nItr)[0] !=
-                std::numeric_limits<PBQPNum>::infinity()) {
-            return true;
-          }
-          // Otherwise we have an infinite spill cost node.
-          initializeNode(nItr);
-          NodeData &nd = getHeuristicNodeData(nItr);
-          return nd.isAllocable;
+          return true;
         }
         // else
         return false;
@@ -273,7 +267,7 @@ namespace PBQP {
         if (!nd.isHeuristic)
           return;
 
-        EdgeData &ed = getHeuristicEdgeData(eItr);
+        EdgeData &ed ATTRIBUTE_UNUSED = getHeuristicEdgeData(eItr);
 
         assert(ed.isUpToDate && "Edge data is not up to date.");
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/PHIElimination.cpp b/libclamav/c++/llvm/lib/CodeGen/PHIElimination.cpp
index 365df30..b740c68 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PHIElimination.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/PHIElimination.cpp
@@ -21,6 +21,7 @@
 #include "llvm/CodeGen/MachineInstr.h"
 #include "llvm/CodeGen/MachineInstrBuilder.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/Target/TargetInstrInfo.h"
 #include "llvm/Function.h"
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/ADT/SmallPtrSet.h"
@@ -95,14 +96,14 @@ bool llvm::PHIElimination::runOnMachineFunction(MachineFunction &Fn) {
 ///
 bool llvm::PHIElimination::EliminatePHINodes(MachineFunction &MF,
                                              MachineBasicBlock &MBB) {
-  if (MBB.empty() || MBB.front().getOpcode() != TargetInstrInfo::PHI)
+  if (MBB.empty() || !MBB.front().isPHI())
     return false;   // Quick exit for basic blocks without PHIs.
 
   // Get an iterator to the first instruction after the last PHI node (this may
   // also be the end of the basic block).
   MachineBasicBlock::iterator AfterPHIsIt = SkipPHIsAndLabels(MBB, MBB.begin());
 
-  while (MBB.front().getOpcode() == TargetInstrInfo::PHI)
+  while (MBB.front().isPHI())
     LowerAtomicPHINode(MBB, AfterPHIsIt);
 
   return true;
@@ -115,7 +116,7 @@ static bool isSourceDefinedByImplicitDef(const MachineInstr *MPhi,
   for (unsigned i = 1; i != MPhi->getNumOperands(); i += 2) {
     unsigned SrcReg = MPhi->getOperand(i).getReg();
     const MachineInstr *DefMI = MRI->getVRegDef(SrcReg);
-    if (!DefMI || DefMI->getOpcode() != TargetInstrInfo::IMPLICIT_DEF)
+    if (!DefMI || !DefMI->isImplicitDef())
       return false;
   }
   return true;
@@ -197,7 +198,7 @@ void llvm::PHIElimination::LowerAtomicPHINode(
     // If all sources of a PHI node are implicit_def, just emit an
     // implicit_def instead of a copy.
     BuildMI(MBB, AfterPHIsIt, MPhi->getDebugLoc(),
-            TII->get(TargetInstrInfo::IMPLICIT_DEF), DestReg);
+            TII->get(TargetOpcode::IMPLICIT_DEF), DestReg);
   else {
     // Can we reuse an earlier PHI node? This only happens for critical edges,
     // typically those created by tail duplication.
@@ -281,7 +282,7 @@ void llvm::PHIElimination::LowerAtomicPHINode(
     // If source is defined by an implicit def, there is no need to insert a
     // copy.
     MachineInstr *DefMI = MRI->getVRegDef(SrcReg);
-    if (DefMI->getOpcode() == TargetInstrInfo::IMPLICIT_DEF) {
+    if (DefMI->isImplicitDef()) {
       ImpDefs.insert(DefMI);
       continue;
     }
@@ -375,7 +376,7 @@ void llvm::PHIElimination::analyzePHINodes(const MachineFunction& Fn) {
   for (MachineFunction::const_iterator I = Fn.begin(), E = Fn.end();
        I != E; ++I)
     for (MachineBasicBlock::const_iterator BBI = I->begin(), BBE = I->end();
-         BBI != BBE && BBI->getOpcode() == TargetInstrInfo::PHI; ++BBI)
+         BBI != BBE && BBI->isPHI(); ++BBI)
       for (unsigned i = 1, e = BBI->getNumOperands(); i != e; i += 2)
         ++VRegPHIUseCount[BBVRegPair(BBI->getOperand(i+1).getMBB()->getNumber(),
                                      BBI->getOperand(i).getReg())];
@@ -384,12 +385,11 @@ void llvm::PHIElimination::analyzePHINodes(const MachineFunction& Fn) {
 bool llvm::PHIElimination::SplitPHIEdges(MachineFunction &MF,
                                          MachineBasicBlock &MBB,
                                          LiveVariables &LV) {
-  if (MBB.empty() || MBB.front().getOpcode() != TargetInstrInfo::PHI ||
-      MBB.isLandingPad())
+  if (MBB.empty() || !MBB.front().isPHI() || MBB.isLandingPad())
     return false;   // Quick exit for basic blocks without PHIs.
 
   for (MachineBasicBlock::const_iterator BBI = MBB.begin(), BBE = MBB.end();
-       BBI != BBE && BBI->getOpcode() == TargetInstrInfo::PHI; ++BBI) {
+       BBI != BBE && BBI->isPHI(); ++BBI) {
     for (unsigned i = 1, e = BBI->getNumOperands(); i != e; i += 2) {
       unsigned Reg = BBI->getOperand(i).getReg();
       MachineBasicBlock *PreMBB = BBI->getOperand(i+1).getMBB();
@@ -438,7 +438,7 @@ MachineBasicBlock *PHIElimination::SplitCriticalEdge(MachineBasicBlock *A,
 
   // Fix PHI nodes in B so they refer to NMBB instead of A
   for (MachineBasicBlock::iterator i = B->begin(), e = B->end();
-       i != e && i->getOpcode() == TargetInstrInfo::PHI; ++i)
+       i != e && i->isPHI(); ++i)
     for (unsigned ni = 1, ne = i->getNumOperands(); ni != ne; ni += 2)
       if (i->getOperand(ni+1).getMBB() == A)
         i->getOperand(ni+1).setMBB(NMBB);
diff --git a/libclamav/c++/llvm/lib/CodeGen/PHIElimination.h b/libclamav/c++/llvm/lib/CodeGen/PHIElimination.h
index 1bcc9dc..895aaa4 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PHIElimination.h
+++ b/libclamav/c++/llvm/lib/CodeGen/PHIElimination.h
@@ -14,10 +14,11 @@
 #include "llvm/ADT/SmallSet.h"
 #include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/CodeGen/MachineFunctionPass.h"
-#include "llvm/Target/TargetInstrInfo.h"
+#include "llvm/CodeGen/MachineRegisterInfo.h"
 
 namespace llvm {
-
+  class LiveVariables;
+  
   /// Lower PHI instructions to copies.  
   class PHIElimination : public MachineFunctionPass {
     MachineRegisterInfo  *MRI; // Machine register information
@@ -112,8 +113,7 @@ namespace llvm {
                                                 MachineBasicBlock::iterator I) {
       // Rather than assuming that EH labels come before other kinds of labels,
       // just skip all labels.
-      while (I != MBB.end() &&
-             (I->getOpcode() == TargetInstrInfo::PHI || I->isLabel()))
+      while (I != MBB.end() && (I->isPHI() || I->isLabel()))
         ++I;
       return I;
     }
diff --git a/libclamav/c++/llvm/lib/CodeGen/PreAllocSplitting.cpp b/libclamav/c++/llvm/lib/CodeGen/PreAllocSplitting.cpp
index 8cbc8c2..70e91aa 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PreAllocSplitting.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/PreAllocSplitting.cpp
@@ -686,8 +686,7 @@ void PreAllocSplitting::ReconstructLiveInterval(LiveInterval* LI) {
     SlotIndex DefIdx = LIs->getInstructionIndex(&*DI);
     DefIdx = DefIdx.getDefIndex();
     
-    assert(DI->getOpcode() != TargetInstrInfo::PHI &&
-           "PHI instr in code during pre-alloc splitting.");
+    assert(!DI->isPHI() && "PHI instr in code during pre-alloc splitting.");
     VNInfo* NewVN = LI->getNextValue(DefIdx, 0, true, Alloc);
     
     // If the def is a move, set the copy field.
diff --git a/libclamav/c++/llvm/lib/CodeGen/ProcessImplicitDefs.cpp b/libclamav/c++/llvm/lib/CodeGen/ProcessImplicitDefs.cpp
index a00f450..e3df2e4 100644
--- a/libclamav/c++/llvm/lib/CodeGen/ProcessImplicitDefs.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/ProcessImplicitDefs.cpp
@@ -49,9 +49,9 @@ bool ProcessImplicitDefs::CanTurnIntoImplicitDef(MachineInstr *MI,
       Reg == SrcReg)
     return true;
 
-  if (OpIdx == 2 && MI->getOpcode() == TargetInstrInfo::SUBREG_TO_REG)
+  if (OpIdx == 2 && MI->isSubregToReg())
     return true;
-  if (OpIdx == 1 && MI->getOpcode() == TargetInstrInfo::EXTRACT_SUBREG)
+  if (OpIdx == 1 && MI->isExtractSubreg())
     return true;
   return false;
 }
@@ -88,7 +88,7 @@ bool ProcessImplicitDefs::runOnMachineFunction(MachineFunction &fn) {
          I != E; ) {
       MachineInstr *MI = &*I;
       ++I;
-      if (MI->getOpcode() == TargetInstrInfo::IMPLICIT_DEF) {
+      if (MI->isImplicitDef()) {
         unsigned Reg = MI->getOperand(0).getReg();
         ImpDefRegs.insert(Reg);
         if (TargetRegisterInfo::isPhysicalRegister(Reg)) {
@@ -99,7 +99,7 @@ bool ProcessImplicitDefs::runOnMachineFunction(MachineFunction &fn) {
         continue;
       }
 
-      if (MI->getOpcode() == TargetInstrInfo::INSERT_SUBREG) {
+      if (MI->isInsertSubreg()) {
         MachineOperand &MO = MI->getOperand(2);
         if (ImpDefRegs.count(MO.getReg())) {
           // %reg1032<def> = INSERT_SUBREG %reg1032, undef, 2
@@ -127,7 +127,7 @@ bool ProcessImplicitDefs::runOnMachineFunction(MachineFunction &fn) {
         // Use is a copy, just turn it into an implicit_def.
         if (CanTurnIntoImplicitDef(MI, Reg, i, tii_)) {
           bool isKill = MO.isKill();
-          MI->setDesc(tii_->get(TargetInstrInfo::IMPLICIT_DEF));
+          MI->setDesc(tii_->get(TargetOpcode::IMPLICIT_DEF));
           for (int j = MI->getNumOperands() - 1, ee = 0; j > ee; --j)
             MI->RemoveOperand(j);
           if (isKill) {
@@ -187,7 +187,7 @@ bool ProcessImplicitDefs::runOnMachineFunction(MachineFunction &fn) {
       for (MachineRegisterInfo::def_iterator DI = mri_->def_begin(Reg),
              DE = mri_->def_end(); DI != DE; ++DI) {
         MachineInstr *DeadImpDef = &*DI;
-        if (DeadImpDef->getOpcode() != TargetInstrInfo::IMPLICIT_DEF) {
+        if (!DeadImpDef->isImplicitDef()) {
           Skip = true;
           break;
         }
@@ -220,7 +220,7 @@ bool ProcessImplicitDefs::runOnMachineFunction(MachineFunction &fn) {
         unsigned SrcReg, DstReg, SrcSubReg, DstSubReg;
         if (tii_->isMoveInstr(*RMI, SrcReg, DstReg, SrcSubReg, DstSubReg) &&
             Reg == SrcReg) {
-          RMI->setDesc(tii_->get(TargetInstrInfo::IMPLICIT_DEF));
+          RMI->setDesc(tii_->get(TargetOpcode::IMPLICIT_DEF));
 
           bool isKill = false;
           SmallVector<unsigned, 4> Ops;
@@ -264,8 +264,8 @@ bool ProcessImplicitDefs::runOnMachineFunction(MachineFunction &fn) {
         }
       }
       RUses.clear();
+      ModInsts.clear();
     }
-    ModInsts.clear();
     ImpDefRegs.clear();
     ImpDefMIs.clear();
   }
diff --git a/libclamav/c++/llvm/lib/CodeGen/PrologEpilogInserter.cpp b/libclamav/c++/llvm/lib/CodeGen/PrologEpilogInserter.cpp
index 709d46a..040259e 100644
--- a/libclamav/c++/llvm/lib/CodeGen/PrologEpilogInserter.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/PrologEpilogInserter.cpp
@@ -161,7 +161,7 @@ void PEI::calculateCallsInformation(MachineFunction &Fn) {
         if (Size > MaxCallFrameSize) MaxCallFrameSize = Size;
         HasCalls = true;
         FrameSDOps.push_back(I);
-      } else if (I->getOpcode() == TargetInstrInfo::INLINEASM) {
+      } else if (I->isInlineAsm()) {
         // An InlineAsm might be a call; assume it is to get the stack frame
         // aligned correctly for calls.
         HasCalls = true;
@@ -476,8 +476,6 @@ void PEI::calculateFrameObjectOffsets(MachineFunction &Fn) {
   // Loop over all of the stack objects, assigning sequential addresses...
   MachineFrameInfo *FFI = Fn.getFrameInfo();
 
-  unsigned MaxAlign = 1;
-
   // Start at the beginning of the local area.
   // The Offset is the distance from the stack top in the direction
   // of stack growth -- so it's always nonnegative.
@@ -517,9 +515,6 @@ void PEI::calculateFrameObjectOffsets(MachineFunction &Fn) {
       Offset += FFI->getObjectSize(i);
 
       unsigned Align = FFI->getObjectAlignment(i);
-      // If the alignment of this object is greater than that of the stack,
-      // then increase the stack alignment to match.
-      MaxAlign = std::max(MaxAlign, Align);
       // Adjust to alignment boundary
       Offset = (Offset+Align-1)/Align*Align;
 
@@ -529,9 +524,6 @@ void PEI::calculateFrameObjectOffsets(MachineFunction &Fn) {
     int MaxCSFI = MaxCSFrameIndex, MinCSFI = MinCSFrameIndex;
     for (int i = MaxCSFI; i >= MinCSFI ; --i) {
       unsigned Align = FFI->getObjectAlignment(i);
-      // If the alignment of this object is greater than that of the stack,
-      // then increase the stack alignment to match.
-      MaxAlign = std::max(MaxAlign, Align);
       // Adjust to alignment boundary
       Offset = (Offset+Align-1)/Align*Align;
 
@@ -540,6 +532,8 @@ void PEI::calculateFrameObjectOffsets(MachineFunction &Fn) {
     }
   }
 
+  unsigned MaxAlign = FFI->getMaxAlignment();
+
   // Make sure the special register scavenging spill slot is closest to the
   // frame pointer if a frame pointer is required.
   const TargetRegisterInfo *RegInfo = Fn.getTarget().getRegisterInfo();
@@ -605,11 +599,6 @@ void PEI::calculateFrameObjectOffsets(MachineFunction &Fn) {
 
   // Update frame info to pretend that this is part of the stack...
   FFI->setStackSize(Offset - LocalAreaOffset);
-
-  // Remember the required stack alignment in case targets need it to perform
-  // dynamic stack alignment.
-  if (MaxAlign > FFI->getMaxAlignment())
-    FFI->setMaxAlignment(MaxAlign);
 }
 
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/RegAllocLocal.cpp b/libclamav/c++/llvm/lib/CodeGen/RegAllocLocal.cpp
index cbb5826..c25c24a 100644
--- a/libclamav/c++/llvm/lib/CodeGen/RegAllocLocal.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/RegAllocLocal.cpp
@@ -492,7 +492,8 @@ MachineInstr *RALocal::reloadVirtReg(MachineBasicBlock &MBB, MachineInstr *MI,
   if (unsigned PR = getVirt2PhysRegMapSlot(VirtReg)) {
     MarkPhysRegRecentlyUsed(PR);       // Already have this value available!
     MI->getOperand(OpNum).setReg(PR);  // Assign the input register
-    getVirtRegLastUse(VirtReg) = std::make_pair(MI, OpNum);
+    if (!MI->isDebugValue())
+      getVirtRegLastUse(VirtReg) = std::make_pair(MI, OpNum);
     return MI;
   }
 
@@ -531,7 +532,7 @@ MachineInstr *RALocal::reloadVirtReg(MachineBasicBlock &MBB, MachineInstr *MI,
     std::string msg;
     raw_string_ostream Msg(msg);
     Msg << "Ran out of registers during register allocation!";
-    if (MI->getOpcode() == TargetInstrInfo::INLINEASM) {
+    if (MI->isInlineAsm()) {
       Msg << "\nPlease check your inline asm statement for invalid "
            << "constraints:\n";
       MI->print(Msg, TM);
@@ -544,7 +545,7 @@ MachineInstr *RALocal::reloadVirtReg(MachineBasicBlock &MBB, MachineInstr *MI,
       std::string msg;
       raw_string_ostream Msg(msg);
       Msg << "Ran out of registers during register allocation!";
-      if (MI->getOpcode() == TargetInstrInfo::INLINEASM) {
+      if (MI->isInlineAsm()) {
         Msg << "\nPlease check your inline asm statement for invalid "
              << "constraints:\n";
         MI->print(Msg, TM);
@@ -609,6 +610,8 @@ void RALocal::ComputeLocalLiveness(MachineBasicBlock& MBB) {
   DenseMap<unsigned, std::pair<MachineInstr*, unsigned> > LastUseDef;
   for (MachineBasicBlock::iterator I = MBB.begin(), E = MBB.end();
        I != E; ++I) {
+    if (I->isDebugValue())
+      continue;
     for (unsigned i = 0, e = I->getNumOperands(); i != e; ++i) {
       MachineOperand& MO = I->getOperand(i);
       // Uses don't trigger any flags, but we need to save
@@ -764,8 +767,11 @@ void RALocal::AllocateBasicBlock(MachineBasicBlock &MBB) {
     // Determine whether this is a copy instruction.  The cases where the
     // source or destination are phys regs are handled specially.
     unsigned SrcCopyReg, DstCopyReg, SrcCopySubReg, DstCopySubReg;
+    unsigned SrcCopyPhysReg = 0U;
     bool isCopy = TII->isMoveInstr(*MI, SrcCopyReg, DstCopyReg, 
                                    SrcCopySubReg, DstCopySubReg);
+    if (isCopy && TargetRegisterInfo::isVirtualRegister(SrcCopyReg))
+      SrcCopyPhysReg = getVirt2PhysRegMapSlot(SrcCopyReg);
 
     // Loop over the implicit uses, making sure that they are at the head of the
     // use order list, so they don't get reallocated.
@@ -793,7 +799,7 @@ void RALocal::AllocateBasicBlock(MachineBasicBlock &MBB) {
     // have in them, then mark them unallocatable.
     // If any virtual regs are earlyclobber, allocate them now (before
     // freeing inputs that are killed).
-    if (MI->getOpcode()==TargetInstrInfo::INLINEASM) {
+    if (MI->isInlineAsm()) {
       for (unsigned i = 0; i != MI->getNumOperands(); ++i) {
         MachineOperand& MO = MI->getOperand(i);
         if (MO.isReg() && MO.isDef() && MO.isEarlyClobber() &&
@@ -838,6 +844,18 @@ void RALocal::AllocateBasicBlock(MachineBasicBlock &MBB) {
       }
     }
 
+    // If a DBG_VALUE says something is located in a spilled register,
+    // change the DBG_VALUE to be undef, which prevents the register
+    // from being reloaded here.  Doing that would change the generated
+    // code, unless another use immediately follows this instruction.
+    if (MI->isDebugValue() &&
+        MI->getNumOperands()==3 && MI->getOperand(0).isReg()) {
+      unsigned VirtReg = MI->getOperand(0).getReg();
+      if (VirtReg && TargetRegisterInfo::isVirtualRegister(VirtReg) &&
+          !getVirt2PhysRegMapSlot(VirtReg))
+        MI->getOperand(0).setReg(0U);
+    }
+
     // Get the used operands into registers.  This has the potential to spill
     // incoming values if we are out of registers.  Note that we completely
     // ignore physical register uses here.  We assume that if an explicit
@@ -965,13 +983,26 @@ void RALocal::AllocateBasicBlock(MachineBasicBlock &MBB) {
 
         // If DestVirtReg already has a value, use it.
         if (!(DestPhysReg = getVirt2PhysRegMapSlot(DestVirtReg))) {
+          // If this is a copy try to reuse the input as the output;
+          // that will make the copy go away.
           // If this is a copy, the source reg is a phys reg, and
           // that reg is available, use that phys reg for DestPhysReg.
+          // If this is a copy, the source reg is a virtual reg, and
+          // the phys reg that was assigned to that virtual reg is now
+          // available, use that phys reg for DestPhysReg.  (If it's now
+          // available that means this was the last use of the source.)
           if (isCopy &&
               TargetRegisterInfo::isPhysicalRegister(SrcCopyReg) &&
               isPhysRegAvailable(SrcCopyReg)) {
             DestPhysReg = SrcCopyReg;
             assignVirtToPhysReg(DestVirtReg, DestPhysReg);
+          } else if (isCopy &&
+              TargetRegisterInfo::isVirtualRegister(SrcCopyReg) &&
+              SrcCopyPhysReg && isPhysRegAvailable(SrcCopyPhysReg) &&
+              MF->getRegInfo().getRegClass(DestVirtReg)->
+                               contains(SrcCopyPhysReg)) {
+            DestPhysReg = SrcCopyPhysReg;
+            assignVirtToPhysReg(DestVirtReg, DestPhysReg);
           } else
             DestPhysReg = getReg(MBB, MI, DestVirtReg);
         }
diff --git a/libclamav/c++/llvm/lib/CodeGen/RegAllocPBQP.cpp b/libclamav/c++/llvm/lib/CodeGen/RegAllocPBQP.cpp
index 74e155f..2701faf 100644
--- a/libclamav/c++/llvm/lib/CodeGen/RegAllocPBQP.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/RegAllocPBQP.cpp
@@ -411,16 +411,16 @@ PBQPRegAlloc::CoalesceMap PBQPRegAlloc::findCoalesces() {
       // We also need any physical regs to be allocable, coalescing with
       // a non-allocable register is invalid.
       if (srcRegIsPhysical) {
-        if (std::find(srcRegClass->allocation_order_begin(*mf),
-                      srcRegClass->allocation_order_end(*mf), srcReg) ==
-            srcRegClass->allocation_order_end(*mf))
+        if (std::find(dstRegClass->allocation_order_begin(*mf),
+                      dstRegClass->allocation_order_end(*mf), srcReg) ==
+            dstRegClass->allocation_order_end(*mf))
           continue;
       }
 
       if (dstRegIsPhysical) {
-        if (std::find(dstRegClass->allocation_order_begin(*mf),
-                      dstRegClass->allocation_order_end(*mf), dstReg) ==
-            dstRegClass->allocation_order_end(*mf))
+        if (std::find(srcRegClass->allocation_order_begin(*mf),
+                      srcRegClass->allocation_order_end(*mf), dstReg) ==
+            srcRegClass->allocation_order_end(*mf))
           continue;
       }
 
@@ -442,6 +442,12 @@ PBQPRegAlloc::CoalesceMap PBQPRegAlloc::findCoalesces() {
                vniItr = srcLI->vni_begin(), vniEnd = srcLI->vni_end();
                vniItr != vniEnd; ++vniItr) {
 
+          // If we find a poorly defined def we err on the side of caution.
+          if (!(*vniItr)->def.isValid()) {
+            badDef = true;
+            break;
+          }
+
           // If we find a def that kills the coalescing opportunity then
           // record it and break from the loop.
           if (dstLI->liveAt((*vniItr)->def)) {
@@ -463,6 +469,11 @@ PBQPRegAlloc::CoalesceMap PBQPRegAlloc::findCoalesces() {
           if ((*vniItr)->getCopy() == instr)
             continue;
 
+          if (!(*vniItr)->def.isValid()) {
+            badDef = true;
+            break;
+          }
+
           if (srcLI->liveAt((*vniItr)->def)) {
             badDef = true;
             break;
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
index 640bdc0..9189e71 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
@@ -2640,7 +2640,7 @@ SDValue DAGCombiner::visitSRA(SDNode *N) {
 
       // If the shift is not a no-op (in which case this should be just a sign
       // extend already), the truncated to type is legal, sign_extend is legal
-      // on that type, and the the truncate to that type is both legal and free,
+      // on that type, and the truncate to that type is both legal and free,
       // perform the transform.
       if ((ShiftAmt > 0) &&
           TLI.isOperationLegalOrCustom(ISD::SIGN_EXTEND, TruncVT) &&
@@ -5403,12 +5403,16 @@ SDValue DAGCombiner::visitEXTRACT_VECTOR_ELT(SDNode *N) {
   SDValue InVec = N->getOperand(0);
 
  if (InVec.getOpcode() == ISD::SCALAR_TO_VECTOR) {
-   // If the operand is wider than the vector element type then it is implicitly
-   // truncated.  Make that explicit here.
+   // Check if the result type doesn't match the inserted element type. A
+   // SCALAR_TO_VECTOR may truncate the inserted element and the
+   // EXTRACT_VECTOR_ELT may widen the extracted vector.
    EVT EltVT = InVec.getValueType().getVectorElementType();
    SDValue InOp = InVec.getOperand(0);
-   if (InOp.getValueType() != EltVT)
-     return DAG.getNode(ISD::TRUNCATE, InVec.getDebugLoc(), EltVT, InOp);
+   EVT NVT = N->getValueType(0);
+   if (InOp.getValueType() != NVT) {
+     assert(InOp.getValueType().isInteger() && NVT.isInteger());
+     return DAG.getSExtOrTrunc(InOp, InVec.getDebugLoc(), NVT);
+   }
    return InOp;
  }
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp
index b7dbd03..35ef5b7 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp
@@ -121,7 +121,7 @@ unsigned FastISel::getRegForValue(Value *V) {
     Reg = LocalValueMap[CE];
   } else if (isa<UndefValue>(V)) {
     Reg = createResultReg(TLI.getRegClassFor(VT));
-    BuildMI(MBB, DL, TII.get(TargetInstrInfo::IMPLICIT_DEF), Reg);
+    BuildMI(MBB, DL, TII.get(TargetOpcode::IMPLICIT_DEF), Reg);
   }
   
   // If target-independent code couldn't handle the value, give target-specific
@@ -332,6 +332,8 @@ bool FastISel::SelectCall(User *I) {
       return true;
 
     Value *Address = DI->getAddress();
+    if (!Address)
+      return true;
     AllocaInst *AI = dyn_cast<AllocaInst>(Address);
     // Don't handle byval struct arguments or VLAs, for example.
     if (!AI) break;
@@ -343,7 +345,7 @@ bool FastISel::SelectCall(User *I) {
       if (MDNode *Dbg = DI->getMetadata("dbg"))
         MMI->setVariableDbgInfo(DI->getVariable(), FI, Dbg);
     }
-    // Building the map above is target independent.  Generating DEBUG_VALUE
+    // Building the map above is target independent.  Generating DBG_VALUE
     // inline is target dependent; do this now.
     (void)TargetSelectInstruction(cast<Instruction>(I));
     return true;
@@ -969,7 +971,7 @@ unsigned FastISel::FastEmitInst_extractsubreg(MVT RetVT,
   const TargetRegisterClass* RC = MRI.getRegClass(Op0);
   
   unsigned ResultReg = createResultReg(TLI.getRegClassFor(RetVT));
-  const TargetInstrDesc &II = TII.get(TargetInstrInfo::EXTRACT_SUBREG);
+  const TargetInstrDesc &II = TII.get(TargetOpcode::EXTRACT_SUBREG);
   
   if (II.getNumDefs() >= 1)
     BuildMI(MBB, DL, II, ResultReg).addReg(Op0).addImm(Idx);
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FunctionLoweringInfo.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FunctionLoweringInfo.cpp
index dc7d82d..50f4c32 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FunctionLoweringInfo.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/FunctionLoweringInfo.cpp
@@ -227,7 +227,7 @@ void FunctionLoweringInfo::set(Function &fn, MachineFunction &mf,
         unsigned NumRegisters = TLI.getNumRegisters(Fn->getContext(), VT);
         const TargetInstrInfo *TII = MF->getTarget().getInstrInfo();
         for (unsigned i = 0; i != NumRegisters; ++i)
-          BuildMI(MBB, DL, TII->get(TargetInstrInfo::PHI), PHIReg + i);
+          BuildMI(MBB, DL, TII->get(TargetOpcode::PHI), PHIReg + i);
         PHIReg += NumRegisters;
       }
     }
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.cpp
index 9c50936..02fe85d 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.cpp
@@ -178,7 +178,7 @@ void InstrEmitter::CreateVirtualRegisters(SDNode *Node, MachineInstr *MI,
                                        const TargetInstrDesc &II,
                                        bool IsClone, bool IsCloned,
                                        DenseMap<SDValue, unsigned> &VRBaseMap) {
-  assert(Node->getMachineOpcode() != TargetInstrInfo::IMPLICIT_DEF &&
+  assert(Node->getMachineOpcode() != TargetOpcode::IMPLICIT_DEF &&
          "IMPLICIT_DEF should have been handled as a special case elsewhere!");
 
   for (unsigned i = 0; i < II.getNumDefs(); ++i) {
@@ -236,7 +236,7 @@ void InstrEmitter::CreateVirtualRegisters(SDNode *Node, MachineInstr *MI,
 unsigned InstrEmitter::getVR(SDValue Op,
                              DenseMap<SDValue, unsigned> &VRBaseMap) {
   if (Op.isMachineOpcode() &&
-      Op.getMachineOpcode() == TargetInstrInfo::IMPLICIT_DEF) {
+      Op.getMachineOpcode() == TargetOpcode::IMPLICIT_DEF) {
     // Add an IMPLICIT_DEF instruction before every use.
     unsigned VReg = getDstOfOnlyCopyToRegUse(Op.getNode(), Op.getResNo());
     // IMPLICIT_DEF can produce any type of result so its TargetInstrDesc
@@ -246,7 +246,7 @@ unsigned InstrEmitter::getVR(SDValue Op,
       VReg = MRI->createVirtualRegister(RC);
     }
     BuildMI(MBB, Op.getDebugLoc(),
-            TII->get(TargetInstrInfo::IMPLICIT_DEF), VReg);
+            TII->get(TargetOpcode::IMPLICIT_DEF), VReg);
     return VReg;
   }
 
@@ -396,12 +396,12 @@ void InstrEmitter::EmitSubregNode(SDNode *Node,
     }
   }
   
-  if (Opc == TargetInstrInfo::EXTRACT_SUBREG) {
+  if (Opc == TargetOpcode::EXTRACT_SUBREG) {
     unsigned SubIdx = cast<ConstantSDNode>(Node->getOperand(1))->getZExtValue();
 
     // Create the extract_subreg machine instruction.
     MachineInstr *MI = BuildMI(*MF, Node->getDebugLoc(),
-                               TII->get(TargetInstrInfo::EXTRACT_SUBREG));
+                               TII->get(TargetOpcode::EXTRACT_SUBREG));
 
     // Figure out the register class to create for the destreg.
     unsigned VReg = getVR(Node->getOperand(0), VRBaseMap);
@@ -424,8 +424,8 @@ void InstrEmitter::EmitSubregNode(SDNode *Node,
     AddOperand(MI, Node->getOperand(0), 0, 0, VRBaseMap);
     MI->addOperand(MachineOperand::CreateImm(SubIdx));
     MBB->insert(InsertPos, MI);
-  } else if (Opc == TargetInstrInfo::INSERT_SUBREG ||
-             Opc == TargetInstrInfo::SUBREG_TO_REG) {
+  } else if (Opc == TargetOpcode::INSERT_SUBREG ||
+             Opc == TargetOpcode::SUBREG_TO_REG) {
     SDValue N0 = Node->getOperand(0);
     SDValue N1 = Node->getOperand(1);
     SDValue N2 = Node->getOperand(2);
@@ -452,7 +452,7 @@ void InstrEmitter::EmitSubregNode(SDNode *Node,
     
     // If creating a subreg_to_reg, then the first input operand
     // is an implicit value immediate, otherwise it's a register
-    if (Opc == TargetInstrInfo::SUBREG_TO_REG) {
+    if (Opc == TargetOpcode::SUBREG_TO_REG) {
       const ConstantSDNode *SD = cast<ConstantSDNode>(N0);
       MI->addOperand(MachineOperand::CreateImm(SD->getZExtValue()));
     } else
@@ -507,20 +507,20 @@ void InstrEmitter::EmitNode(SDNode *Node, bool IsClone, bool IsCloned,
     unsigned Opc = Node->getMachineOpcode();
     
     // Handle subreg insert/extract specially
-    if (Opc == TargetInstrInfo::EXTRACT_SUBREG || 
-        Opc == TargetInstrInfo::INSERT_SUBREG ||
-        Opc == TargetInstrInfo::SUBREG_TO_REG) {
+    if (Opc == TargetOpcode::EXTRACT_SUBREG || 
+        Opc == TargetOpcode::INSERT_SUBREG ||
+        Opc == TargetOpcode::SUBREG_TO_REG) {
       EmitSubregNode(Node, VRBaseMap);
       return;
     }
 
     // Handle COPY_TO_REGCLASS specially.
-    if (Opc == TargetInstrInfo::COPY_TO_REGCLASS) {
+    if (Opc == TargetOpcode::COPY_TO_REGCLASS) {
       EmitCopyToRegClassNode(Node, VRBaseMap);
       return;
     }
 
-    if (Opc == TargetInstrInfo::IMPLICIT_DEF)
+    if (Opc == TargetOpcode::IMPLICIT_DEF)
       // We want a unique VR for each IMPLICIT_DEF use.
       return;
     
@@ -640,7 +640,7 @@ void InstrEmitter::EmitNode(SDNode *Node, bool IsClone, bool IsCloned,
       
     // Create the inline asm machine instruction.
     MachineInstr *MI = BuildMI(*MF, Node->getDebugLoc(),
-                               TII->get(TargetInstrInfo::INLINEASM));
+                               TII->get(TargetOpcode::INLINEASM));
 
     // Add the asm string as an external symbol operand.
     const char *AsmStr =
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
index 12a4b31..78e6e4e 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
@@ -2767,7 +2767,7 @@ void SelectionDAGLegalize::ExpandNode(SDNode *Node,
                             DAG.getIntPtrConstant(1));
     } else {
       // FIXME: We should be able to fall back to a libcall with an illegal
-      // type in some cases cases.
+      // type in some cases.
       // Also, we can fall back to a division in some cases, but that's a big
       // performance hit in the general case.
       llvm_unreachable("Don't know how to expand this operation yet!");
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
index bf95bb5..e955e10 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
@@ -1242,10 +1242,96 @@ void DAGTypeLegalizer::WidenVectorResult(SDNode *N, unsigned ResNo) {
 
 SDValue DAGTypeLegalizer::WidenVecRes_Binary(SDNode *N) {
   // Binary op widening.
+  unsigned Opcode = N->getOpcode();
+  DebugLoc dl = N->getDebugLoc();
   EVT WidenVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
-  SDValue InOp1 = GetWidenedVector(N->getOperand(0));
-  SDValue InOp2 = GetWidenedVector(N->getOperand(1));
-  return DAG.getNode(N->getOpcode(), N->getDebugLoc(), WidenVT, InOp1, InOp2);
+  EVT WidenEltVT = WidenVT.getVectorElementType();
+  EVT VT = WidenVT;
+  unsigned NumElts =  VT.getVectorNumElements();
+  while (!TLI.isTypeLegal(VT) && NumElts != 1) {
+     NumElts = NumElts / 2;
+     VT = EVT::getVectorVT(*DAG.getContext(), WidenEltVT, NumElts);
+  }
+
+  if (NumElts != 1 && !TLI.canOpTrap(N->getOpcode(), VT)) {
+    // Operation doesn't trap so just widen as normal.
+    SDValue InOp1 = GetWidenedVector(N->getOperand(0));
+    SDValue InOp2 = GetWidenedVector(N->getOperand(1));
+    return DAG.getNode(N->getOpcode(), dl, WidenVT, InOp1, InOp2);
+  } else if (NumElts == 1) {
+    // No legal vector version so unroll the vector operation and then widen.
+    return DAG.UnrollVectorOp(N, WidenVT.getVectorNumElements());
+  } else {
+    // Since the operation can trap, apply operation on the original vector.
+    SDValue InOp1 = GetWidenedVector(N->getOperand(0));
+    SDValue InOp2 = GetWidenedVector(N->getOperand(1));
+    unsigned CurNumElts = N->getValueType(0).getVectorNumElements();
+
+    SmallVector<SDValue, 16> ConcatOps(CurNumElts);
+    unsigned ConcatEnd = 0;  // Current ConcatOps index.
+    unsigned Idx = 0;        // Current Idx into input vectors.
+    while (CurNumElts != 0) {
+      while (CurNumElts >= NumElts) {
+        SDValue EOp1 = DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl, VT, InOp1,
+                                   DAG.getIntPtrConstant(Idx));
+        SDValue EOp2 = DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl, VT, InOp2,
+                                   DAG.getIntPtrConstant(Idx));
+        ConcatOps[ConcatEnd++] = DAG.getNode(Opcode, dl, VT, EOp1, EOp2);
+        Idx += NumElts;
+        CurNumElts -= NumElts;
+      }
+      EVT PrevVecVT = VT;
+      do {
+        NumElts = NumElts / 2;
+        VT = EVT::getVectorVT(*DAG.getContext(), WidenEltVT, NumElts);
+      } while (!TLI.isTypeLegal(VT) && NumElts != 1);
+
+      if (NumElts == 1) {
+        // Since we are using concat vector, build a vector from the scalar ops.
+        SDValue VecOp = DAG.getUNDEF(PrevVecVT);
+        for (unsigned i = 0; i != CurNumElts; ++i, ++Idx) {
+          SDValue EOp1 = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, dl, WidenEltVT, 
+                                     InOp1, DAG.getIntPtrConstant(Idx));
+          SDValue EOp2 = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, dl, WidenEltVT, 
+                                     InOp2, DAG.getIntPtrConstant(Idx));
+          VecOp = DAG.getNode(ISD::INSERT_VECTOR_ELT, dl, PrevVecVT, VecOp,
+                              DAG.getNode(Opcode, dl, WidenEltVT, EOp1, EOp2),
+                              DAG.getIntPtrConstant(i));
+        }
+        CurNumElts = 0;
+        ConcatOps[ConcatEnd++] = VecOp;
+      }
+    }
+
+    // Check to see if we have a single operation with the widen type.
+    if (ConcatEnd == 1) {
+      VT = ConcatOps[0].getValueType();
+      if (VT == WidenVT)
+        return ConcatOps[0];
+    }
+
+    // Rebuild vector to one with the widen type
+    Idx = ConcatEnd - 1;
+    while (Idx != 0) {
+      VT = ConcatOps[Idx--].getValueType();
+      while (Idx != 0 && ConcatOps[Idx].getValueType() == VT)
+        --Idx;
+      if (Idx != 0) {
+        VT = ConcatOps[Idx].getValueType();
+        ConcatOps[Idx+1] = DAG.getNode(ISD::CONCAT_VECTORS, dl, VT,
+                                     &ConcatOps[Idx+1], ConcatEnd - Idx - 1);
+        ConcatEnd = Idx + 2;
+      }
+    }
+    
+    unsigned NumOps = WidenVT.getVectorNumElements()/VT.getVectorNumElements();
+    if (NumOps != ConcatEnd ) {
+      SDValue UndefVal = DAG.getUNDEF(VT);
+      for (unsigned j = ConcatEnd; j < NumOps; ++j)
+        ConcatOps[j] = UndefVal;
+    }
+    return DAG.getNode(ISD::CONCAT_VECTORS, dl, WidenVT, &ConcatOps[0], NumOps);
+  }
 }
 
 SDValue DAGTypeLegalizer::WidenVecRes_Convert(SDNode *N) {
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGRRList.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGRRList.cpp
index dea5993..3f1766d 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGRRList.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGRRList.cpp
@@ -345,6 +345,15 @@ void ScheduleDAGRRList::BacktrackBottomUp(SUnit *SU, unsigned BtCycle,
   ++NumBacktracks;
 }
 
+static bool isOperandOf(const SUnit *SU, SDNode *N) {
+  for (const SDNode *SUNode = SU->getNode(); SUNode;
+       SUNode = SUNode->getFlaggedNode()) {
+    if (SUNode->isOperandOf(N))
+      return true;
+  }
+  return false;
+}
+
 /// CopyAndMoveSuccessors - Clone the specified node and move its scheduled
 /// successors to the newly created node.
 SUnit *ScheduleDAGRRList::CopyAndMoveSuccessors(SUnit *SU) {
@@ -427,8 +436,7 @@ SUnit *ScheduleDAGRRList::CopyAndMoveSuccessors(SUnit *SU) {
          I != E; ++I) {
       if (I->isCtrl())
         ChainPreds.push_back(*I);
-      else if (I->getSUnit()->getNode() &&
-               I->getSUnit()->getNode()->isOperandOf(LoadNode))
+      else if (isOperandOf(I->getSUnit(), LoadNode))
         LoadPreds.push_back(*I);
       else
         NodePreds.push_back(*I);
@@ -1034,9 +1042,9 @@ namespace {
         // CopyToReg should be close to its uses to facilitate coalescing and
         // avoid spilling.
         return 0;
-      if (Opc == TargetInstrInfo::EXTRACT_SUBREG ||
-          Opc == TargetInstrInfo::SUBREG_TO_REG ||
-          Opc == TargetInstrInfo::INSERT_SUBREG)
+      if (Opc == TargetOpcode::EXTRACT_SUBREG ||
+          Opc == TargetOpcode::SUBREG_TO_REG ||
+          Opc == TargetOpcode::INSERT_SUBREG)
         // EXTRACT_SUBREG, INSERT_SUBREG, and SUBREG_TO_REG nodes should be
         // close to their uses to facilitate coalescing.
         return 0;
@@ -1437,7 +1445,7 @@ void RegReductionPriorityQueue<SF>::AddPseudoTwoAddrDeps() {
         while (SuccSU->Succs.size() == 1 &&
                SuccSU->getNode()->isMachineOpcode() &&
                SuccSU->getNode()->getMachineOpcode() ==
-                 TargetInstrInfo::COPY_TO_REGCLASS)
+                 TargetOpcode::COPY_TO_REGCLASS)
           SuccSU = SuccSU->Succs.front().getSUnit();
         // Don't constrain non-instruction nodes.
         if (!SuccSU->getNode() || !SuccSU->getNode()->isMachineOpcode())
@@ -1451,9 +1459,9 @@ void RegReductionPriorityQueue<SF>::AddPseudoTwoAddrDeps() {
         // Don't constrain EXTRACT_SUBREG, INSERT_SUBREG, and SUBREG_TO_REG;
         // these may be coalesced away. We want them close to their uses.
         unsigned SuccOpc = SuccSU->getNode()->getMachineOpcode();
-        if (SuccOpc == TargetInstrInfo::EXTRACT_SUBREG ||
-            SuccOpc == TargetInstrInfo::INSERT_SUBREG ||
-            SuccOpc == TargetInstrInfo::SUBREG_TO_REG)
+        if (SuccOpc == TargetOpcode::EXTRACT_SUBREG ||
+            SuccOpc == TargetOpcode::INSERT_SUBREG ||
+            SuccOpc == TargetOpcode::SUBREG_TO_REG)
           continue;
         if ((!canClobber(SuccSU, DUSU) ||
              (hasCopyToRegUse(SU) && !hasCopyToRegUse(SuccSU)) ||
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
index f1b6f1e..6122a2a 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
@@ -1925,19 +1925,28 @@ void SelectionDAG::ComputeMaskedBits(SDValue Op, const APInt &Mask,
   }
   case ISD::SREM:
     if (ConstantSDNode *Rem = dyn_cast<ConstantSDNode>(Op.getOperand(1))) {
-      const APInt &RA = Rem->getAPIntValue();
-      if (RA.isPowerOf2() || (-RA).isPowerOf2()) {
-        APInt LowBits = RA.isStrictlyPositive() ? (RA - 1) : ~RA;
+      const APInt &RA = Rem->getAPIntValue().abs();
+      if (RA.isPowerOf2()) {
+        APInt LowBits = RA - 1;
         APInt Mask2 = LowBits | APInt::getSignBit(BitWidth);
         ComputeMaskedBits(Op.getOperand(0), Mask2,KnownZero2,KnownOne2,Depth+1);
 
-        // If the sign bit of the first operand is zero, the sign bit of
-        // the result is zero. If the first operand has no one bits below
-        // the second operand's single 1 bit, its sign will be zero.
+        // The low bits of the first operand are unchanged by the srem.
+        KnownZero = KnownZero2 & LowBits;
+        KnownOne = KnownOne2 & LowBits;
+
+        // If the first operand is non-negative or has all low bits zero, then
+        // the upper bits are all zero.
         if (KnownZero2[BitWidth-1] || ((KnownZero2 & LowBits) == LowBits))
-          KnownZero2 |= ~LowBits;
+          KnownZero |= ~LowBits;
 
-        KnownZero |= KnownZero2 & Mask;
+        // If the first operand is negative and not all low bits are zero, then
+        // the upper bits are all one.
+        if (KnownOne2[BitWidth-1] && ((KnownOne2 & LowBits) != 0))
+          KnownOne |= ~LowBits;
+
+        KnownZero &= Mask;
+        KnownOne &= Mask;
 
         assert((KnownZero & KnownOne) == 0&&"Bits known to be one AND zero?");
       }
@@ -2755,13 +2764,16 @@ SDValue SelectionDAG::getNode(unsigned Opcode, DebugLoc DL, EVT VT,
     // EXTRACT_VECTOR_ELT of INSERT_VECTOR_ELT is often formed when vector
     // operations are lowered to scalars.
     if (N1.getOpcode() == ISD::INSERT_VECTOR_ELT) {
-      // If the indices are the same, return the inserted element.
-      if (N1.getOperand(2) == N2)
-        return N1.getOperand(1);
-      // If the indices are known different, extract the element from
+      // If the indices are the same, return the inserted element else
+      // if the indices are known different, extract the element from
       // the original vector.
-      else if (isa<ConstantSDNode>(N1.getOperand(2)) &&
-               isa<ConstantSDNode>(N2))
+      if (N1.getOperand(2) == N2) {
+        if (VT == N1.getOperand(1).getValueType())
+          return N1.getOperand(1);
+        else
+          return getSExtOrTrunc(N1.getOperand(1), DL, VT);
+      } else if (isa<ConstantSDNode>(N1.getOperand(2)) &&
+                 isa<ConstantSDNode>(N2))
         return getNode(ISD::EXTRACT_VECTOR_ELT, DL, VT, N1.getOperand(0), N2);
     }
     break;
@@ -4860,23 +4872,23 @@ SelectionDAG::getMachineNode(unsigned Opcode, DebugLoc DL, SDVTList VTs,
 }
 
 /// getTargetExtractSubreg - A convenience function for creating
-/// TargetInstrInfo::EXTRACT_SUBREG nodes.
+/// TargetOpcode::EXTRACT_SUBREG nodes.
 SDValue
 SelectionDAG::getTargetExtractSubreg(int SRIdx, DebugLoc DL, EVT VT,
                                      SDValue Operand) {
   SDValue SRIdxVal = getTargetConstant(SRIdx, MVT::i32);
-  SDNode *Subreg = getMachineNode(TargetInstrInfo::EXTRACT_SUBREG, DL,
+  SDNode *Subreg = getMachineNode(TargetOpcode::EXTRACT_SUBREG, DL,
                                   VT, Operand, SRIdxVal);
   return SDValue(Subreg, 0);
 }
 
 /// getTargetInsertSubreg - A convenience function for creating
-/// TargetInstrInfo::INSERT_SUBREG nodes.
+/// TargetOpcode::INSERT_SUBREG nodes.
 SDValue
 SelectionDAG::getTargetInsertSubreg(int SRIdx, DebugLoc DL, EVT VT,
                                     SDValue Operand, SDValue Subreg) {
   SDValue SRIdxVal = getTargetConstant(SRIdx, MVT::i32);
-  SDNode *Result = getMachineNode(TargetInstrInfo::INSERT_SUBREG, DL,
+  SDNode *Result = getMachineNode(TargetOpcode::INSERT_SUBREG, DL,
                                   VT, Operand, Subreg, SRIdxVal);
   return SDValue(Result, 0);
 }
@@ -5212,11 +5224,12 @@ unsigned SelectionDAG::AssignTopologicalOrder() {
       }
     }
     if (I == SortedPos) {
-      allnodes_iterator J = I;
-      SDNode *S = ++J;
-      dbgs() << "Offending node:\n";
+#ifndef NDEBUG
+      SDNode *S = ++I;
+      dbgs() << "Overran sorted position:\n";
       S->dumprFull();
-      assert(0 && "Overran sorted position");
+#endif
+      llvm_unreachable(0);
     }
   }
 
@@ -5237,7 +5250,7 @@ unsigned SelectionDAG::AssignTopologicalOrder() {
 }
 
 /// AssignOrdering - Assign an order to the SDNode.
-void SelectionDAG::AssignOrdering(SDNode *SD, unsigned Order) {
+void SelectionDAG::AssignOrdering(const SDNode *SD, unsigned Order) {
   assert(SD && "Trying to assign an order to a null node!");
   Ordering->add(SD, Order);
 }
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index dddf385..6f60c7b 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -131,6 +131,17 @@ namespace {
       }
     }
 
+    /// areValueTypesLegal - Return true if types of all the values are legal.
+    bool areValueTypesLegal() {
+      for (unsigned Value = 0, e = ValueVTs.size(); Value != e; ++Value) {
+        EVT RegisterVT = RegVTs[Value];
+        if (!TLI->isTypeLegal(RegisterVT))
+          return false;
+      }
+      return true;
+    }
+
+
     /// append - Add the specified values to this one.
     void append(const RegsForValue &RHS) {
       TLI = RHS.TLI;
@@ -176,7 +187,6 @@ static SDValue getCopyFromParts(SelectionDAG &DAG, DebugLoc dl, unsigned Order,
   assert(NumParts > 0 && "No parts to assemble!");
   const TargetLowering &TLI = DAG.getTargetLoweringInfo();
   SDValue Val = Parts[0];
-  DAG.AssignOrdering(Val.getNode(), Order);
 
   if (NumParts > 1) {
     // Assemble the value from multiple parts.
@@ -209,10 +219,6 @@ static SDValue getCopyFromParts(SelectionDAG &DAG, DebugLoc dl, unsigned Order,
 
       Val = DAG.getNode(ISD::BUILD_PAIR, dl, RoundVT, Lo, Hi);
 
-      DAG.AssignOrdering(Lo.getNode(), Order);
-      DAG.AssignOrdering(Hi.getNode(), Order);
-      DAG.AssignOrdering(Val.getNode(), Order);
-
       if (RoundParts < NumParts) {
         // Assemble the trailing non-power-of-2 part.
         unsigned OddParts = NumParts - RoundParts;
@@ -226,15 +232,11 @@ static SDValue getCopyFromParts(SelectionDAG &DAG, DebugLoc dl, unsigned Order,
           std::swap(Lo, Hi);
         EVT TotalVT = EVT::getIntegerVT(*DAG.getContext(), NumParts * PartBits);
         Hi = DAG.getNode(ISD::ANY_EXTEND, dl, TotalVT, Hi);
-        DAG.AssignOrdering(Hi.getNode(), Order);
         Hi = DAG.getNode(ISD::SHL, dl, TotalVT, Hi,
                          DAG.getConstant(Lo.getValueType().getSizeInBits(),
                                          TLI.getPointerTy()));
-        DAG.AssignOrdering(Hi.getNode(), Order);
         Lo = DAG.getNode(ISD::ZERO_EXTEND, dl, TotalVT, Lo);
-        DAG.AssignOrdering(Lo.getNode(), Order);
         Val = DAG.getNode(ISD::OR, dl, TotalVT, Lo, Hi);
-        DAG.AssignOrdering(Val.getNode(), Order);
       }
     } else if (ValueVT.isVector()) {
       // Handle a multi-element vector.
@@ -275,7 +277,6 @@ static SDValue getCopyFromParts(SelectionDAG &DAG, DebugLoc dl, unsigned Order,
       Val = DAG.getNode(IntermediateVT.isVector() ?
                         ISD::CONCAT_VECTORS : ISD::BUILD_VECTOR, dl,
                         ValueVT, &Ops[0], NumIntermediates);
-      DAG.AssignOrdering(Val.getNode(), Order);
     } else if (PartVT.isFloatingPoint()) {
       // FP split into multiple FP parts (for ppcf128)
       assert(ValueVT == EVT(MVT::ppcf128) && PartVT == EVT(MVT::f64) &&
@@ -286,10 +287,6 @@ static SDValue getCopyFromParts(SelectionDAG &DAG, DebugLoc dl, unsigned Order,
       if (TLI.isBigEndian())
         std::swap(Lo, Hi);
       Val = DAG.getNode(ISD::BUILD_PAIR, dl, ValueVT, Lo, Hi);
-
-      DAG.AssignOrdering(Hi.getNode(), Order);
-      DAG.AssignOrdering(Lo.getNode(), Order);
-      DAG.AssignOrdering(Val.getNode(), Order);
     } else {
       // FP split into integer parts (soft fp)
       assert(ValueVT.isFloatingPoint() && PartVT.isInteger() &&
@@ -307,18 +304,14 @@ static SDValue getCopyFromParts(SelectionDAG &DAG, DebugLoc dl, unsigned Order,
 
   if (PartVT.isVector()) {
     assert(ValueVT.isVector() && "Unknown vector conversion!");
-    SDValue Res = DAG.getNode(ISD::BIT_CONVERT, dl, ValueVT, Val);
-    DAG.AssignOrdering(Res.getNode(), Order);
-    return Res;
+    return DAG.getNode(ISD::BIT_CONVERT, dl, ValueVT, Val);
   }
 
   if (ValueVT.isVector()) {
     assert(ValueVT.getVectorElementType() == PartVT &&
            ValueVT.getVectorNumElements() == 1 &&
            "Only trivial scalar-to-vector conversions should get here!");
-    SDValue Res = DAG.getNode(ISD::BUILD_VECTOR, dl, ValueVT, Val);
-    DAG.AssignOrdering(Res.getNode(), Order);
-    return Res;
+    return DAG.getNode(ISD::BUILD_VECTOR, dl, ValueVT, Val);
   }
 
   if (PartVT.isInteger() &&
@@ -330,36 +323,24 @@ static SDValue getCopyFromParts(SelectionDAG &DAG, DebugLoc dl, unsigned Order,
       if (AssertOp != ISD::DELETED_NODE)
         Val = DAG.getNode(AssertOp, dl, PartVT, Val,
                           DAG.getValueType(ValueVT));
-      DAG.AssignOrdering(Val.getNode(), Order);
-      Val = DAG.getNode(ISD::TRUNCATE, dl, ValueVT, Val);
-      DAG.AssignOrdering(Val.getNode(), Order);
-      return Val;
+      return DAG.getNode(ISD::TRUNCATE, dl, ValueVT, Val);
     } else {
-      Val = DAG.getNode(ISD::ANY_EXTEND, dl, ValueVT, Val);
-      DAG.AssignOrdering(Val.getNode(), Order);
-      return Val;
+      return DAG.getNode(ISD::ANY_EXTEND, dl, ValueVT, Val);
     }
   }
 
   if (PartVT.isFloatingPoint() && ValueVT.isFloatingPoint()) {
     if (ValueVT.bitsLT(Val.getValueType())) {
       // FP_ROUND's are always exact here.
-      Val = DAG.getNode(ISD::FP_ROUND, dl, ValueVT, Val,
-                        DAG.getIntPtrConstant(1));
-      DAG.AssignOrdering(Val.getNode(), Order);
-      return Val;
+      return DAG.getNode(ISD::FP_ROUND, dl, ValueVT, Val,
+                         DAG.getIntPtrConstant(1));
     }
 
-    Val = DAG.getNode(ISD::FP_EXTEND, dl, ValueVT, Val);
-    DAG.AssignOrdering(Val.getNode(), Order);
-    return Val;
+    return DAG.getNode(ISD::FP_EXTEND, dl, ValueVT, Val);
   }
 
-  if (PartVT.getSizeInBits() == ValueVT.getSizeInBits()) {
-    Val = DAG.getNode(ISD::BIT_CONVERT, dl, ValueVT, Val);
-    DAG.AssignOrdering(Val.getNode(), Order);
-    return Val;
-  }
+  if (PartVT.getSizeInBits() == ValueVT.getSizeInBits())
+    return DAG.getNode(ISD::BIT_CONVERT, dl, ValueVT, Val);
 
   llvm_unreachable("Unknown mismatch!");
   return SDValue();
@@ -414,8 +395,6 @@ static void getCopyToParts(SelectionDAG &DAG, DebugLoc dl, unsigned Order,
       }
     }
 
-    DAG.AssignOrdering(Val.getNode(), Order);
-
     // The value may have changed - recompute ValueVT.
     ValueVT = Val.getValueType();
     assert(NumParts * PartBits == ValueVT.getSizeInBits() &&
@@ -448,9 +427,6 @@ static void getCopyToParts(SelectionDAG &DAG, DebugLoc dl, unsigned Order,
       NumParts = RoundParts;
       ValueVT = EVT::getIntegerVT(*DAG.getContext(), NumParts * PartBits);
       Val = DAG.getNode(ISD::TRUNCATE, dl, ValueVT, Val);
-
-      DAG.AssignOrdering(OddVal.getNode(), Order);
-      DAG.AssignOrdering(Val.getNode(), Order);
     }
 
     // The number of parts is a power of 2.  Repeatedly bisect the value using
@@ -460,8 +436,6 @@ static void getCopyToParts(SelectionDAG &DAG, DebugLoc dl, unsigned Order,
                                              ValueVT.getSizeInBits()),
                            Val);
 
-    DAG.AssignOrdering(Parts[0].getNode(), Order);
-
     for (unsigned StepSize = NumParts; StepSize > 1; StepSize /= 2) {
       for (unsigned i = 0; i < NumParts; i += StepSize) {
         unsigned ThisBits = StepSize * PartBits / 2;
@@ -476,16 +450,11 @@ static void getCopyToParts(SelectionDAG &DAG, DebugLoc dl, unsigned Order,
                             ThisVT, Part0,
                             DAG.getConstant(0, PtrVT));
 
-        DAG.AssignOrdering(Part0.getNode(), Order);
-        DAG.AssignOrdering(Part1.getNode(), Order);
-
         if (ThisBits == PartBits && ThisVT != PartVT) {
           Part0 = DAG.getNode(ISD::BIT_CONVERT, dl,
                                                 PartVT, Part0);
           Part1 = DAG.getNode(ISD::BIT_CONVERT, dl,
                                                 PartVT, Part1);
-          DAG.AssignOrdering(Part0.getNode(), Order);
-          DAG.AssignOrdering(Part1.getNode(), Order);
         }
       }
     }
@@ -511,7 +480,6 @@ static void getCopyToParts(SelectionDAG &DAG, DebugLoc dl, unsigned Order,
       }
     }
 
-    DAG.AssignOrdering(Val.getNode(), Order);
     Parts[0] = Val;
     return;
   }
@@ -539,8 +507,6 @@ static void getCopyToParts(SelectionDAG &DAG, DebugLoc dl, unsigned Order,
       Ops[i] = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, dl,
                            IntermediateVT, Val,
                            DAG.getConstant(i, PtrVT));
-
-    DAG.AssignOrdering(Ops[i].getNode(), Order);
   }
 
   // Split the intermediate operands into legal parts.
@@ -638,23 +604,34 @@ SDValue SelectionDAGBuilder::getControlRoot() {
   return Root;
 }
 
+void SelectionDAGBuilder::AssignOrderingToNode(const SDNode *Node) {
+  if (DAG.GetOrdering(Node) != 0) return; // Already has ordering.
+  DAG.AssignOrdering(Node, SDNodeOrder);
+
+  for (unsigned I = 0, E = Node->getNumOperands(); I != E; ++I)
+    AssignOrderingToNode(Node->getOperand(I).getNode());
+}
+
 void SelectionDAGBuilder::visit(Instruction &I) {
   visit(I.getOpcode(), I);
 }
 
 void SelectionDAGBuilder::visit(unsigned Opcode, User &I) {
-  // We're processing a new instruction.
-  ++SDNodeOrder;
-
   // Note: this doesn't use InstVisitor, because it has to work with
   // ConstantExpr's in addition to instructions.
   switch (Opcode) {
   default: llvm_unreachable("Unknown instruction type encountered!");
     // Build the switch statement using the Instruction.def file.
 #define HANDLE_INST(NUM, OPCODE, CLASS) \
-  case Instruction::OPCODE: return visit##OPCODE((CLASS&)I);
+    case Instruction::OPCODE: visit##OPCODE((CLASS&)I); break;
 #include "llvm/Instruction.def"
   }
+
+  // Assign the ordering to the freshly created DAG nodes.
+  if (NodeMap.count(&I)) {
+    ++SDNodeOrder;
+    AssignOrderingToNode(getValue(&I).getNode());
+  }
 }
 
 SDValue SelectionDAGBuilder::getValue(const Value *V) {
@@ -699,10 +676,8 @@ SDValue SelectionDAGBuilder::getValue(const Value *V) {
           Constants.push_back(SDValue(Val, i));
       }
 
-      SDValue Res = DAG.getMergeValues(&Constants[0], Constants.size(),
-                                       getCurDebugLoc());
-      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
-      return Res;
+      return DAG.getMergeValues(&Constants[0], Constants.size(),
+                                getCurDebugLoc());
     }
 
     if (isa<StructType>(C->getType()) || isa<ArrayType>(C->getType())) {
@@ -725,10 +700,8 @@ SDValue SelectionDAGBuilder::getValue(const Value *V) {
           Constants[i] = DAG.getConstant(0, EltVT);
       }
 
-      SDValue Res = DAG.getMergeValues(&Constants[0], NumElts,
-                                       getCurDebugLoc());
-      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
-      return Res;
+      return DAG.getMergeValues(&Constants[0], NumElts,
+                                getCurDebugLoc());
     }
 
     if (BlockAddress *BA = dyn_cast<BlockAddress>(C))
@@ -756,10 +729,8 @@ SDValue SelectionDAGBuilder::getValue(const Value *V) {
     }
 
     // Create a BUILD_VECTOR node.
-    SDValue Res = DAG.getNode(ISD::BUILD_VECTOR, getCurDebugLoc(),
-                              VT, &Ops[0], Ops.size());
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
-    return NodeMap[V] = Res;
+    return NodeMap[V] = DAG.getNode(ISD::BUILD_VECTOR, getCurDebugLoc(),
+                                    VT, &Ops[0], Ops.size());
   }
 
   // If this is a static alloca, generate it as the frameindex instead of
@@ -874,15 +845,10 @@ void SelectionDAGBuilder::visitRet(ReturnInst &I) {
         DAG.getStore(Chain, getCurDebugLoc(),
                      SDValue(RetOp.getNode(), RetOp.getResNo() + i),
                      Add, NULL, Offsets[i], false, 0);
-
-      DAG.AssignOrdering(Add.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(Chains[i].getNode(), SDNodeOrder);
     }
 
     Chain = DAG.getNode(ISD::TokenFactor, getCurDebugLoc(),
                         MVT::Other, &Chains[0], NumValues);
-
-    DAG.AssignOrdering(Chain.getNode(), SDNodeOrder);
   } else {
     for (unsigned i = 0, e = I.getNumOperands(); i != e; ++i) {
       SmallVector<EVT, 4> ValueVTs;
@@ -948,7 +914,6 @@ void SelectionDAGBuilder::visitRet(ReturnInst &I) {
 
   // Update the DAG with the new chain value resulting from return lowering.
   DAG.setRoot(Chain);
-  DAG.AssignOrdering(Chain.getNode(), SDNodeOrder);
 }
 
 /// CopyToExportRegsIfNeeded - If the given value has virtual registers
@@ -1209,13 +1174,10 @@ void SelectionDAGBuilder::visitBr(BranchInst &I) {
     CurMBB->addSuccessor(Succ0MBB);
 
     // If this is not a fall-through branch, emit the branch.
-    if (Succ0MBB != NextBlock) {
-      SDValue V = DAG.getNode(ISD::BR, getCurDebugLoc(),
+    if (Succ0MBB != NextBlock)
+      DAG.setRoot(DAG.getNode(ISD::BR, getCurDebugLoc(),
                               MVT::Other, getControlRoot(),
-                              DAG.getBasicBlock(Succ0MBB));
-      DAG.setRoot(V);
-      DAG.AssignOrdering(V.getNode(), SDNodeOrder);
-    }
+                              DAG.getBasicBlock(Succ0MBB)));
 
     return;
   }
@@ -1321,8 +1283,6 @@ void SelectionDAGBuilder::visitSwitchCase(CaseBlock &CB) {
     }
   }
 
-  DAG.AssignOrdering(Cond.getNode(), SDNodeOrder);
-
   // Update successor info
   CurMBB->addSuccessor(CB.TrueBB);
   CurMBB->addSuccessor(CB.FalseBB);
@@ -1340,13 +1300,11 @@ void SelectionDAGBuilder::visitSwitchCase(CaseBlock &CB) {
     std::swap(CB.TrueBB, CB.FalseBB);
     SDValue True = DAG.getConstant(1, Cond.getValueType());
     Cond = DAG.getNode(ISD::XOR, dl, Cond.getValueType(), Cond, True);
-    DAG.AssignOrdering(Cond.getNode(), SDNodeOrder);
   }
 
   SDValue BrCond = DAG.getNode(ISD::BRCOND, dl,
                                MVT::Other, getControlRoot(), Cond,
                                DAG.getBasicBlock(CB.TrueBB));
-  DAG.AssignOrdering(BrCond.getNode(), SDNodeOrder);
 
   // If the branch was constant folded, fix up the CFG.
   if (BrCond.getOpcode() == ISD::BR) {
@@ -1356,12 +1314,9 @@ void SelectionDAGBuilder::visitSwitchCase(CaseBlock &CB) {
     if (BrCond == getControlRoot())
       CurMBB->removeSuccessor(CB.TrueBB);
 
-    if (CB.FalseBB != NextBlock) {
+    if (CB.FalseBB != NextBlock)
       BrCond = DAG.getNode(ISD::BR, dl, MVT::Other, BrCond,
                            DAG.getBasicBlock(CB.FalseBB));
-
-      DAG.AssignOrdering(BrCond.getNode(), SDNodeOrder);
-    }
   }
 
   DAG.setRoot(BrCond);
@@ -1379,10 +1334,6 @@ void SelectionDAGBuilder::visitJumpTable(JumpTable &JT) {
                                     MVT::Other, Index.getValue(1),
                                     Table, Index);
   DAG.setRoot(BrJumpTable);
-
-  DAG.AssignOrdering(Index.getNode(), SDNodeOrder);
-  DAG.AssignOrdering(Table.getNode(), SDNodeOrder);
-  DAG.AssignOrdering(BrJumpTable.getNode(), SDNodeOrder);
 }
 
 /// visitJumpTableHeader - This function emits necessary code to produce index
@@ -1398,7 +1349,7 @@ void SelectionDAGBuilder::visitJumpTableHeader(JumpTable &JT,
                             DAG.getConstant(JTH.First, VT));
 
   // The SDNode we just created, which holds the value being switched on minus
-  // the the smallest case value, needs to be copied to a virtual register so it
+  // the smallest case value, needs to be copied to a virtual register so it
   // can be used as an index into the jump table in a subsequent basic block.
   // This value may be smaller or larger than the target's pointer type, and
   // therefore require extension or truncating.
@@ -1417,11 +1368,6 @@ void SelectionDAGBuilder::visitJumpTableHeader(JumpTable &JT,
                              DAG.getConstant(JTH.Last-JTH.First,VT),
                              ISD::SETUGT);
 
-  DAG.AssignOrdering(Sub.getNode(), SDNodeOrder);
-  DAG.AssignOrdering(SwitchOp.getNode(), SDNodeOrder);
-  DAG.AssignOrdering(CopyTo.getNode(), SDNodeOrder);
-  DAG.AssignOrdering(CMP.getNode(), SDNodeOrder);
-
   // Set NextBlock to be the MBB immediately after the current one, if any.
   // This is used to avoid emitting unnecessary branches to the next block.
   MachineBasicBlock *NextBlock = 0;
@@ -1434,13 +1380,9 @@ void SelectionDAGBuilder::visitJumpTableHeader(JumpTable &JT,
                                MVT::Other, CopyTo, CMP,
                                DAG.getBasicBlock(JT.Default));
 
-  DAG.AssignOrdering(BrCond.getNode(), SDNodeOrder);
-
-  if (JT.MBB != NextBlock) {
+  if (JT.MBB != NextBlock)
     BrCond = DAG.getNode(ISD::BR, getCurDebugLoc(), MVT::Other, BrCond,
                          DAG.getBasicBlock(JT.MBB));
-    DAG.AssignOrdering(BrCond.getNode(), SDNodeOrder);
-  }
 
   DAG.setRoot(BrCond);
 }
@@ -1467,11 +1409,6 @@ void SelectionDAGBuilder::visitBitTestHeader(BitTestBlock &B) {
   SDValue CopyTo = DAG.getCopyToReg(getControlRoot(), getCurDebugLoc(),
                                     B.Reg, ShiftOp);
 
-  DAG.AssignOrdering(Sub.getNode(), SDNodeOrder);
-  DAG.AssignOrdering(RangeCmp.getNode(), SDNodeOrder);
-  DAG.AssignOrdering(ShiftOp.getNode(), SDNodeOrder);
-  DAG.AssignOrdering(CopyTo.getNode(), SDNodeOrder);
-
   // Set NextBlock to be the MBB immediately after the current one, if any.
   // This is used to avoid emitting unnecessary branches to the next block.
   MachineBasicBlock *NextBlock = 0;
@@ -1488,13 +1425,9 @@ void SelectionDAGBuilder::visitBitTestHeader(BitTestBlock &B) {
                                 MVT::Other, CopyTo, RangeCmp,
                                 DAG.getBasicBlock(B.Default));
 
-  DAG.AssignOrdering(BrRange.getNode(), SDNodeOrder);
-
-  if (MBB != NextBlock) {
+  if (MBB != NextBlock)
     BrRange = DAG.getNode(ISD::BR, getCurDebugLoc(), MVT::Other, CopyTo,
                           DAG.getBasicBlock(MBB));
-    DAG.AssignOrdering(BrRange.getNode(), SDNodeOrder);
-  }
 
   DAG.setRoot(BrRange);
 }
@@ -1520,11 +1453,6 @@ void SelectionDAGBuilder::visitBitTestCase(MachineBasicBlock* NextMBB,
                                 AndOp, DAG.getConstant(0, TLI.getPointerTy()),
                                 ISD::SETNE);
 
-  DAG.AssignOrdering(ShiftOp.getNode(), SDNodeOrder);
-  DAG.AssignOrdering(SwitchVal.getNode(), SDNodeOrder);
-  DAG.AssignOrdering(AndOp.getNode(), SDNodeOrder);
-  DAG.AssignOrdering(AndCmp.getNode(), SDNodeOrder);
-
   CurMBB->addSuccessor(B.TargetBB);
   CurMBB->addSuccessor(NextMBB);
 
@@ -1532,8 +1460,6 @@ void SelectionDAGBuilder::visitBitTestCase(MachineBasicBlock* NextMBB,
                               MVT::Other, getControlRoot(),
                               AndCmp, DAG.getBasicBlock(B.TargetBB));
 
-  DAG.AssignOrdering(BrAnd.getNode(), SDNodeOrder);
-
   // Set NextBlock to be the MBB immediately after the current one, if any.
   // This is used to avoid emitting unnecessary branches to the next block.
   MachineBasicBlock *NextBlock = 0;
@@ -1541,11 +1467,9 @@ void SelectionDAGBuilder::visitBitTestCase(MachineBasicBlock* NextMBB,
   if (++BBI != FuncInfo.MF->end())
     NextBlock = BBI;
 
-  if (NextMBB != NextBlock) {
+  if (NextMBB != NextBlock)
     BrAnd = DAG.getNode(ISD::BR, getCurDebugLoc(), MVT::Other, BrAnd,
                         DAG.getBasicBlock(NextMBB));
-    DAG.AssignOrdering(BrAnd.getNode(), SDNodeOrder);
-  }
 
   DAG.setRoot(BrAnd);
 }
@@ -1570,11 +1494,9 @@ void SelectionDAGBuilder::visitInvoke(InvokeInst &I) {
   CurMBB->addSuccessor(LandingPad);
 
   // Drop into normal successor.
-  SDValue Branch = DAG.getNode(ISD::BR, getCurDebugLoc(),
-                               MVT::Other, getControlRoot(),
-                               DAG.getBasicBlock(Return));
-  DAG.setRoot(Branch);
-  DAG.AssignOrdering(Branch.getNode(), SDNodeOrder);
+  DAG.setRoot(DAG.getNode(ISD::BR, getCurDebugLoc(),
+                          MVT::Other, getControlRoot(),
+                          DAG.getBasicBlock(Return)));
 }
 
 void SelectionDAGBuilder::visitUnwind(UnwindInst &I) {
@@ -2088,13 +2010,10 @@ void SelectionDAGBuilder::visitSwitch(SwitchInst &SI) {
 
     // If this is not a fall-through branch, emit the branch.
     CurMBB->addSuccessor(Default);
-    if (Default != NextBlock) {
-      SDValue Res = DAG.getNode(ISD::BR, getCurDebugLoc(),
-                                MVT::Other, getControlRoot(),
-                                DAG.getBasicBlock(Default));
-      DAG.setRoot(Res);
-      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
-    }
+    if (Default != NextBlock)
+      DAG.setRoot(DAG.getNode(ISD::BR, getCurDebugLoc(),
+                              MVT::Other, getControlRoot(),
+                              DAG.getBasicBlock(Default)));
 
     return;
   }
@@ -2143,15 +2062,19 @@ void SelectionDAGBuilder::visitSwitch(SwitchInst &SI) {
 }
 
 void SelectionDAGBuilder::visitIndirectBr(IndirectBrInst &I) {
-  // Update machine-CFG edges.
+  // Update machine-CFG edges with unique successors.
+  SmallVector<BasicBlock*, 32> succs;
+  succs.reserve(I.getNumSuccessors());
   for (unsigned i = 0, e = I.getNumSuccessors(); i != e; ++i)
-    CurMBB->addSuccessor(FuncInfo.MBBMap[I.getSuccessor(i)]);
-
-  SDValue Res = DAG.getNode(ISD::BRIND, getCurDebugLoc(),
-                            MVT::Other, getControlRoot(),
-                            getValue(I.getAddress()));
-  DAG.setRoot(Res);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    succs.push_back(I.getSuccessor(i));
+  array_pod_sort(succs.begin(), succs.end());
+  succs.erase(std::unique(succs.begin(), succs.end()), succs.end());
+  for (unsigned i = 0, e = succs.size(); i != e; ++i)
+    CurMBB->addSuccessor(FuncInfo.MBBMap[succs[i]]);
+
+  DAG.setRoot(DAG.getNode(ISD::BRIND, getCurDebugLoc(),
+                          MVT::Other, getControlRoot(),
+                          getValue(I.getAddress())));
 }
 
 void SelectionDAGBuilder::visitFSub(User &I) {
@@ -2166,10 +2089,8 @@ void SelectionDAGBuilder::visitFSub(User &I) {
       Constant *CNZ = ConstantVector::get(&NZ[0], NZ.size());
       if (CV == CNZ) {
         SDValue Op2 = getValue(I.getOperand(1));
-        SDValue Res = DAG.getNode(ISD::FNEG, getCurDebugLoc(),
-                                  Op2.getValueType(), Op2);
-        setValue(&I, Res);
-        DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+        setValue(&I, DAG.getNode(ISD::FNEG, getCurDebugLoc(),
+                                 Op2.getValueType(), Op2));
         return;
       }
     }
@@ -2178,10 +2099,8 @@ void SelectionDAGBuilder::visitFSub(User &I) {
   if (ConstantFP *CFP = dyn_cast<ConstantFP>(I.getOperand(0)))
     if (CFP->isExactlyValue(ConstantFP::getNegativeZero(Ty)->getValueAPF())) {
       SDValue Op2 = getValue(I.getOperand(1));
-      SDValue Res = DAG.getNode(ISD::FNEG, getCurDebugLoc(),
-                                Op2.getValueType(), Op2);
-      setValue(&I, Res);
-      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+      setValue(&I, DAG.getNode(ISD::FNEG, getCurDebugLoc(),
+                               Op2.getValueType(), Op2));
       return;
     }
 
@@ -2191,10 +2110,8 @@ void SelectionDAGBuilder::visitFSub(User &I) {
 void SelectionDAGBuilder::visitBinary(User &I, unsigned OpCode) {
   SDValue Op1 = getValue(I.getOperand(0));
   SDValue Op2 = getValue(I.getOperand(1));
-  SDValue Res = DAG.getNode(OpCode, getCurDebugLoc(),
-                            Op1.getValueType(), Op1, Op2);
-  setValue(&I, Res);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getNode(OpCode, getCurDebugLoc(),
+                           Op1.getValueType(), Op1, Op2));
 }
 
 void SelectionDAGBuilder::visitShift(User &I, unsigned Opcode) {
@@ -2227,12 +2144,8 @@ void SelectionDAGBuilder::visitShift(User &I, unsigned Opcode) {
                         TLI.getPointerTy(), Op2);
   }
 
-  SDValue Res = DAG.getNode(Opcode, getCurDebugLoc(),
-                            Op1.getValueType(), Op1, Op2);
-  setValue(&I, Res);
-  DAG.AssignOrdering(Op1.getNode(), SDNodeOrder);
-  DAG.AssignOrdering(Op2.getNode(), SDNodeOrder);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getNode(Opcode, getCurDebugLoc(),
+                           Op1.getValueType(), Op1, Op2));
 }
 
 void SelectionDAGBuilder::visitICmp(User &I) {
@@ -2246,9 +2159,7 @@ void SelectionDAGBuilder::visitICmp(User &I) {
   ISD::CondCode Opcode = getICmpCondCode(predicate);
 
   EVT DestVT = TLI.getValueType(I.getType());
-  SDValue Res = DAG.getSetCC(getCurDebugLoc(), DestVT, Op1, Op2, Opcode);
-  setValue(&I, Res);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getSetCC(getCurDebugLoc(), DestVT, Op1, Op2, Opcode));
 }
 
 void SelectionDAGBuilder::visitFCmp(User &I) {
@@ -2261,9 +2172,7 @@ void SelectionDAGBuilder::visitFCmp(User &I) {
   SDValue Op2 = getValue(I.getOperand(1));
   ISD::CondCode Condition = getFCmpCondCode(predicate);
   EVT DestVT = TLI.getValueType(I.getType());
-  SDValue Res = DAG.getSetCC(getCurDebugLoc(), DestVT, Op1, Op2, Condition);
-  setValue(&I, Res);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getSetCC(getCurDebugLoc(), DestVT, Op1, Op2, Condition));
 }
 
 void SelectionDAGBuilder::visitSelect(User &I) {
@@ -2277,7 +2186,7 @@ void SelectionDAGBuilder::visitSelect(User &I) {
   SDValue TrueVal  = getValue(I.getOperand(1));
   SDValue FalseVal = getValue(I.getOperand(2));
 
-  for (unsigned i = 0; i != NumValues; ++i) {
+  for (unsigned i = 0; i != NumValues; ++i)
     Values[i] = DAG.getNode(ISD::SELECT, getCurDebugLoc(),
                             TrueVal.getNode()->getValueType(i), Cond,
                             SDValue(TrueVal.getNode(),
@@ -2285,23 +2194,16 @@ void SelectionDAGBuilder::visitSelect(User &I) {
                             SDValue(FalseVal.getNode(),
                                     FalseVal.getResNo() + i));
 
-    DAG.AssignOrdering(Values[i].getNode(), SDNodeOrder);
-  }
-
-  SDValue Res = DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
-                            DAG.getVTList(&ValueVTs[0], NumValues),
-                            &Values[0], NumValues);
-  setValue(&I, Res);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
+                           DAG.getVTList(&ValueVTs[0], NumValues),
+                           &Values[0], NumValues));
 }
 
 void SelectionDAGBuilder::visitTrunc(User &I) {
   // TruncInst cannot be a no-op cast because sizeof(src) > sizeof(dest).
   SDValue N = getValue(I.getOperand(0));
   EVT DestVT = TLI.getValueType(I.getType());
-  SDValue Res = DAG.getNode(ISD::TRUNCATE, getCurDebugLoc(), DestVT, N);
-  setValue(&I, Res);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getNode(ISD::TRUNCATE, getCurDebugLoc(), DestVT, N));
 }
 
 void SelectionDAGBuilder::visitZExt(User &I) {
@@ -2309,9 +2211,7 @@ void SelectionDAGBuilder::visitZExt(User &I) {
   // ZExt also can't be a cast to bool for same reason. So, nothing much to do
   SDValue N = getValue(I.getOperand(0));
   EVT DestVT = TLI.getValueType(I.getType());
-  SDValue Res = DAG.getNode(ISD::ZERO_EXTEND, getCurDebugLoc(), DestVT, N);
-  setValue(&I, Res);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getNode(ISD::ZERO_EXTEND, getCurDebugLoc(), DestVT, N));
 }
 
 void SelectionDAGBuilder::visitSExt(User &I) {
@@ -2319,64 +2219,50 @@ void SelectionDAGBuilder::visitSExt(User &I) {
   // SExt also can't be a cast to bool for same reason. So, nothing much to do
   SDValue N = getValue(I.getOperand(0));
   EVT DestVT = TLI.getValueType(I.getType());
-  SDValue Res = DAG.getNode(ISD::SIGN_EXTEND, getCurDebugLoc(), DestVT, N);
-  setValue(&I, Res);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getNode(ISD::SIGN_EXTEND, getCurDebugLoc(), DestVT, N));
 }
 
 void SelectionDAGBuilder::visitFPTrunc(User &I) {
   // FPTrunc is never a no-op cast, no need to check
   SDValue N = getValue(I.getOperand(0));
   EVT DestVT = TLI.getValueType(I.getType());
-  SDValue Res = DAG.getNode(ISD::FP_ROUND, getCurDebugLoc(),
-                            DestVT, N, DAG.getIntPtrConstant(0));
-  setValue(&I, Res);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getNode(ISD::FP_ROUND, getCurDebugLoc(),
+                           DestVT, N, DAG.getIntPtrConstant(0)));
 }
 
 void SelectionDAGBuilder::visitFPExt(User &I){
  // FPExt is never a no-op cast, no need to check
   SDValue N = getValue(I.getOperand(0));
   EVT DestVT = TLI.getValueType(I.getType());
-  SDValue Res = DAG.getNode(ISD::FP_EXTEND, getCurDebugLoc(), DestVT, N);
-  setValue(&I, Res);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getNode(ISD::FP_EXTEND, getCurDebugLoc(), DestVT, N));
 }
 
 void SelectionDAGBuilder::visitFPToUI(User &I) {
   // FPToUI is never a no-op cast, no need to check
   SDValue N = getValue(I.getOperand(0));
   EVT DestVT = TLI.getValueType(I.getType());
-  SDValue Res = DAG.getNode(ISD::FP_TO_UINT, getCurDebugLoc(), DestVT, N);
-  setValue(&I, Res);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getNode(ISD::FP_TO_UINT, getCurDebugLoc(), DestVT, N));
 }
 
 void SelectionDAGBuilder::visitFPToSI(User &I) {
   // FPToSI is never a no-op cast, no need to check
   SDValue N = getValue(I.getOperand(0));
   EVT DestVT = TLI.getValueType(I.getType());
-  SDValue Res = DAG.getNode(ISD::FP_TO_SINT, getCurDebugLoc(), DestVT, N);
-  setValue(&I, Res);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getNode(ISD::FP_TO_SINT, getCurDebugLoc(), DestVT, N));
 }
 
 void SelectionDAGBuilder::visitUIToFP(User &I) {
   // UIToFP is never a no-op cast, no need to check
   SDValue N = getValue(I.getOperand(0));
   EVT DestVT = TLI.getValueType(I.getType());
-  SDValue Res = DAG.getNode(ISD::UINT_TO_FP, getCurDebugLoc(), DestVT, N);
-  setValue(&I, Res);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getNode(ISD::UINT_TO_FP, getCurDebugLoc(), DestVT, N));
 }
 
 void SelectionDAGBuilder::visitSIToFP(User &I){
   // SIToFP is never a no-op cast, no need to check
   SDValue N = getValue(I.getOperand(0));
   EVT DestVT = TLI.getValueType(I.getType());
-  SDValue Res = DAG.getNode(ISD::SINT_TO_FP, getCurDebugLoc(), DestVT, N);
-  setValue(&I, Res);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getNode(ISD::SINT_TO_FP, getCurDebugLoc(), DestVT, N));
 }
 
 void SelectionDAGBuilder::visitPtrToInt(User &I) {
@@ -2385,9 +2271,7 @@ void SelectionDAGBuilder::visitPtrToInt(User &I) {
   SDValue N = getValue(I.getOperand(0));
   EVT SrcVT = N.getValueType();
   EVT DestVT = TLI.getValueType(I.getType());
-  SDValue Res = DAG.getZExtOrTrunc(N, getCurDebugLoc(), DestVT);
-  setValue(&I, Res);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getZExtOrTrunc(N, getCurDebugLoc(), DestVT));
 }
 
 void SelectionDAGBuilder::visitIntToPtr(User &I) {
@@ -2396,9 +2280,7 @@ void SelectionDAGBuilder::visitIntToPtr(User &I) {
   SDValue N = getValue(I.getOperand(0));
   EVT SrcVT = N.getValueType();
   EVT DestVT = TLI.getValueType(I.getType());
-  SDValue Res = DAG.getZExtOrTrunc(N, getCurDebugLoc(), DestVT);
-  setValue(&I, Res);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getZExtOrTrunc(N, getCurDebugLoc(), DestVT));
 }
 
 void SelectionDAGBuilder::visitBitCast(User &I) {
@@ -2407,14 +2289,11 @@ void SelectionDAGBuilder::visitBitCast(User &I) {
 
   // BitCast assures us that source and destination are the same size so this is
   // either a BIT_CONVERT or a no-op.
-  if (DestVT != N.getValueType()) {
-    SDValue Res = DAG.getNode(ISD::BIT_CONVERT, getCurDebugLoc(),
-                              DestVT, N); // convert types.
-    setValue(&I, Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
-  } else {
+  if (DestVT != N.getValueType())
+    setValue(&I, DAG.getNode(ISD::BIT_CONVERT, getCurDebugLoc(),
+                             DestVT, N)); // convert types.
+  else
     setValue(&I, N);            // noop cast.
-  }
 }
 
 void SelectionDAGBuilder::visitInsertElement(User &I) {
@@ -2423,13 +2302,9 @@ void SelectionDAGBuilder::visitInsertElement(User &I) {
   SDValue InIdx = DAG.getNode(ISD::ZERO_EXTEND, getCurDebugLoc(),
                               TLI.getPointerTy(),
                               getValue(I.getOperand(2)));
-  SDValue Res = DAG.getNode(ISD::INSERT_VECTOR_ELT, getCurDebugLoc(),
-                            TLI.getValueType(I.getType()),
-                            InVec, InVal, InIdx);
-  setValue(&I, Res);
-
-  DAG.AssignOrdering(InIdx.getNode(), SDNodeOrder);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getNode(ISD::INSERT_VECTOR_ELT, getCurDebugLoc(),
+                           TLI.getValueType(I.getType()),
+                           InVec, InVal, InIdx));
 }
 
 void SelectionDAGBuilder::visitExtractElement(User &I) {
@@ -2437,15 +2312,10 @@ void SelectionDAGBuilder::visitExtractElement(User &I) {
   SDValue InIdx = DAG.getNode(ISD::ZERO_EXTEND, getCurDebugLoc(),
                               TLI.getPointerTy(),
                               getValue(I.getOperand(1)));
-  SDValue Res = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, getCurDebugLoc(),
-                            TLI.getValueType(I.getType()), InVec, InIdx);
-  setValue(&I, Res);
-
-  DAG.AssignOrdering(InIdx.getNode(), SDNodeOrder);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getNode(ISD::EXTRACT_VECTOR_ELT, getCurDebugLoc(),
+                           TLI.getValueType(I.getType()), InVec, InIdx));
 }
 
-
 // Utility for visitShuffleVector - Returns true if the mask is mask starting
 // from SIndx and increasing to the element length (undefs are allowed).
 static bool SequentialMask(SmallVectorImpl<int> &Mask, unsigned SIndx) {
@@ -2464,8 +2334,7 @@ void SelectionDAGBuilder::visitShuffleVector(User &I) {
   // Convert the ConstantVector mask operand into an array of ints, with -1
   // representing undef values.
   SmallVector<Constant*, 8> MaskElts;
-  cast<Constant>(I.getOperand(2))->getVectorElements(*DAG.getContext(),
-                                                     MaskElts);
+  cast<Constant>(I.getOperand(2))->getVectorElements(MaskElts);
   unsigned MaskNumElts = MaskElts.size();
   for (unsigned i = 0; i != MaskNumElts; ++i) {
     if (isa<UndefValue>(MaskElts[i]))
@@ -2479,10 +2348,8 @@ void SelectionDAGBuilder::visitShuffleVector(User &I) {
   unsigned SrcNumElts = SrcVT.getVectorNumElements();
 
   if (SrcNumElts == MaskNumElts) {
-    SDValue Res = DAG.getVectorShuffle(VT, getCurDebugLoc(), Src1, Src2,
-                                       &Mask[0]);
-    setValue(&I, Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    setValue(&I, DAG.getVectorShuffle(VT, getCurDebugLoc(), Src1, Src2,
+                                      &Mask[0]));
     return;
   }
 
@@ -2493,10 +2360,8 @@ void SelectionDAGBuilder::visitShuffleVector(User &I) {
     // lengths match.
     if (SrcNumElts*2 == MaskNumElts && SequentialMask(Mask, 0)) {
       // The shuffle is concatenating two vectors together.
-      SDValue Res = DAG.getNode(ISD::CONCAT_VECTORS, getCurDebugLoc(),
-                                VT, Src1, Src2);
-      setValue(&I, Res);
-      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+      setValue(&I, DAG.getNode(ISD::CONCAT_VECTORS, getCurDebugLoc(),
+                               VT, Src1, Src2));
       return;
     }
 
@@ -2528,12 +2393,8 @@ void SelectionDAGBuilder::visitShuffleVector(User &I) {
         MappedOps.push_back(Idx + MaskNumElts - SrcNumElts);
     }
 
-    SDValue Res = DAG.getVectorShuffle(VT, getCurDebugLoc(), Src1, Src2,
-                                       &MappedOps[0]);
-    setValue(&I, Res);
-    DAG.AssignOrdering(Src1.getNode(), SDNodeOrder);
-    DAG.AssignOrdering(Src2.getNode(), SDNodeOrder);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    setValue(&I, DAG.getVectorShuffle(VT, getCurDebugLoc(), Src1, Src2,
+                                      &MappedOps[0]));
     return;
   }
 
@@ -2585,9 +2446,7 @@ void SelectionDAGBuilder::visitShuffleVector(User &I) {
     }
 
     if (RangeUse[0] == 0 && RangeUse[1] == 0) {
-      SDValue Res = DAG.getUNDEF(VT);
-      setValue(&I, Res);  // Vectors are not used.
-      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+      setValue(&I, DAG.getUNDEF(VT)); // Vectors are not used.
       return;
     }
     else if (RangeUse[0] < 2 && RangeUse[1] < 2) {
@@ -2599,8 +2458,6 @@ void SelectionDAGBuilder::visitShuffleVector(User &I) {
         else
           Src = DAG.getNode(ISD::EXTRACT_SUBVECTOR, getCurDebugLoc(), VT,
                             Src, DAG.getIntPtrConstant(StartIdx[Input]));
-
-        DAG.AssignOrdering(Src.getNode(), SDNodeOrder);
       }
 
       // Calculate new mask.
@@ -2615,10 +2472,8 @@ void SelectionDAGBuilder::visitShuffleVector(User &I) {
           MappedOps.push_back(Idx - SrcNumElts - StartIdx[1] + MaskNumElts);
       }
 
-      SDValue Res = DAG.getVectorShuffle(VT, getCurDebugLoc(), Src1, Src2,
-                                         &MappedOps[0]);
-      setValue(&I, Res);
-      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+      setValue(&I, DAG.getVectorShuffle(VT, getCurDebugLoc(), Src1, Src2,
+                                        &MappedOps[0]));
       return;
     }
   }
@@ -2645,14 +2500,11 @@ void SelectionDAGBuilder::visitShuffleVector(User &I) {
                           DAG.getConstant(Idx - SrcNumElts, PtrVT));
 
       Ops.push_back(Res);
-      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     }
   }
 
-  SDValue Res = DAG.getNode(ISD::BUILD_VECTOR, getCurDebugLoc(),
-                            VT, &Ops[0], Ops.size());
-  setValue(&I, Res);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getNode(ISD::BUILD_VECTOR, getCurDebugLoc(),
+                           VT, &Ops[0], Ops.size()));
 }
 
 void SelectionDAGBuilder::visitInsertValue(InsertValueInst &I) {
@@ -2691,11 +2543,9 @@ void SelectionDAGBuilder::visitInsertValue(InsertValueInst &I) {
     Values[i] = IntoUndef ? DAG.getUNDEF(AggValueVTs[i]) :
                 SDValue(Agg.getNode(), Agg.getResNo() + i);
 
-  SDValue Res = DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
-                            DAG.getVTList(&AggValueVTs[0], NumAggValues),
-                            &Values[0], NumAggValues);
-  setValue(&I, Res);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
+                           DAG.getVTList(&AggValueVTs[0], NumAggValues),
+                           &Values[0], NumAggValues));
 }
 
 void SelectionDAGBuilder::visitExtractValue(ExtractValueInst &I) {
@@ -2721,11 +2571,9 @@ void SelectionDAGBuilder::visitExtractValue(ExtractValueInst &I) {
         DAG.getUNDEF(Agg.getNode()->getValueType(Agg.getResNo() + i)) :
         SDValue(Agg.getNode(), Agg.getResNo() + i);
 
-  SDValue Res = DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
-                            DAG.getVTList(&ValValueVTs[0], NumValValues),
-                            &Values[0], NumValValues);
-  setValue(&I, Res);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
+                           DAG.getVTList(&ValValueVTs[0], NumValValues),
+                           &Values[0], NumValValues));
 }
 
 void SelectionDAGBuilder::visitGetElementPtr(User &I) {
@@ -2742,7 +2590,6 @@ void SelectionDAGBuilder::visitGetElementPtr(User &I) {
         uint64_t Offset = TD->getStructLayout(StTy)->getElementOffset(Field);
         N = DAG.getNode(ISD::ADD, getCurDebugLoc(), N.getValueType(), N,
                         DAG.getIntPtrConstant(Offset));
-        DAG.AssignOrdering(N.getNode(), SDNodeOrder);
       }
 
       Ty = StTy->getElementType(Field);
@@ -2766,9 +2613,6 @@ void SelectionDAGBuilder::visitGetElementPtr(User &I) {
 
         N = DAG.getNode(ISD::ADD, getCurDebugLoc(), N.getValueType(), N,
                         OffsVal);
-
-        DAG.AssignOrdering(OffsVal.getNode(), SDNodeOrder);
-        DAG.AssignOrdering(N.getNode(), SDNodeOrder);
         continue;
       }
 
@@ -2794,13 +2638,10 @@ void SelectionDAGBuilder::visitGetElementPtr(User &I) {
           IdxN = DAG.getNode(ISD::MUL, getCurDebugLoc(),
                              N.getValueType(), IdxN, Scale);
         }
-
-        DAG.AssignOrdering(IdxN.getNode(), SDNodeOrder);
       }
 
       N = DAG.getNode(ISD::ADD, getCurDebugLoc(),
                       N.getValueType(), N, IdxN);
-      DAG.AssignOrdering(N.getNode(), SDNodeOrder);
     }
   }
 
@@ -2825,11 +2666,8 @@ void SelectionDAGBuilder::visitAlloca(AllocaInst &I) {
                           AllocSize,
                           DAG.getConstant(TySize, AllocSize.getValueType()));
 
-  DAG.AssignOrdering(AllocSize.getNode(), SDNodeOrder);
-
   EVT IntPtr = TLI.getPointerTy();
   AllocSize = DAG.getZExtOrTrunc(AllocSize, getCurDebugLoc(), IntPtr);
-  DAG.AssignOrdering(AllocSize.getNode(), SDNodeOrder);
 
   // Handle alignment.  If the requested alignment is less than or equal to
   // the stack alignment, ignore it.  If the size is greater than or equal to
@@ -2844,13 +2682,11 @@ void SelectionDAGBuilder::visitAlloca(AllocaInst &I) {
   AllocSize = DAG.getNode(ISD::ADD, getCurDebugLoc(),
                           AllocSize.getValueType(), AllocSize,
                           DAG.getIntPtrConstant(StackAlign-1));
-  DAG.AssignOrdering(AllocSize.getNode(), SDNodeOrder);
 
   // Mask out the low bits for alignment purposes.
   AllocSize = DAG.getNode(ISD::AND, getCurDebugLoc(),
                           AllocSize.getValueType(), AllocSize,
                           DAG.getIntPtrConstant(~(uint64_t)(StackAlign-1)));
-  DAG.AssignOrdering(AllocSize.getNode(), SDNodeOrder);
 
   SDValue Ops[] = { getRoot(), AllocSize, DAG.getIntPtrConstant(Align) };
   SDVTList VTs = DAG.getVTList(AllocSize.getValueType(), MVT::Other);
@@ -2858,7 +2694,6 @@ void SelectionDAGBuilder::visitAlloca(AllocaInst &I) {
                             VTs, Ops, 3);
   setValue(&I, DSA);
   DAG.setRoot(DSA.getValue(1));
-  DAG.AssignOrdering(DSA.getNode(), SDNodeOrder);
 
   // Inform the Frame Information that we have just allocated a variable-sized
   // object.
@@ -2906,9 +2741,6 @@ void SelectionDAGBuilder::visitLoad(LoadInst &I) {
 
     Values[i] = L;
     Chains[i] = L.getValue(1);
-
-    DAG.AssignOrdering(A.getNode(), SDNodeOrder);
-    DAG.AssignOrdering(L.getNode(), SDNodeOrder);
   }
 
   if (!ConstantMemory) {
@@ -2918,15 +2750,11 @@ void SelectionDAGBuilder::visitLoad(LoadInst &I) {
       DAG.setRoot(Chain);
     else
       PendingLoads.push_back(Chain);
-
-    DAG.AssignOrdering(Chain.getNode(), SDNodeOrder);
   }
 
-  SDValue Res = DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
-                            DAG.getVTList(&ValueVTs[0], NumValues),
-                            &Values[0], NumValues);
-  setValue(&I, Res);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
+                           DAG.getVTList(&ValueVTs[0], NumValues),
+                           &Values[0], NumValues));
 }
 
 void SelectionDAGBuilder::visitStore(StoreInst &I) {
@@ -2958,15 +2786,10 @@ void SelectionDAGBuilder::visitStore(StoreInst &I) {
     Chains[i] = DAG.getStore(Root, getCurDebugLoc(),
                              SDValue(Src.getNode(), Src.getResNo() + i),
                              Add, PtrV, Offsets[i], isVolatile, Alignment);
-
-    DAG.AssignOrdering(Add.getNode(), SDNodeOrder);
-    DAG.AssignOrdering(Chains[i].getNode(), SDNodeOrder);
   }
 
-  SDValue Res = DAG.getNode(ISD::TokenFactor, getCurDebugLoc(),
-                            MVT::Other, &Chains[0], NumValues);
-  DAG.setRoot(Res);
-  DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+  DAG.setRoot(DAG.getNode(ISD::TokenFactor, getCurDebugLoc(),
+                          MVT::Other, &Chains[0], NumValues));
 }
 
 /// visitTargetIntrinsic - Lower a call of a target intrinsic to an INTRINSIC
@@ -3037,8 +2860,6 @@ void SelectionDAGBuilder::visitTargetIntrinsic(CallInst &I,
                          VTs, &Ops[0], Ops.size());
   }
 
-  DAG.AssignOrdering(Result.getNode(), SDNodeOrder);
-
   if (HasChain) {
     SDValue Chain = Result.getValue(Result.getNode()->getNumValues()-1);
     if (OnlyLoad)
@@ -3051,7 +2872,6 @@ void SelectionDAGBuilder::visitTargetIntrinsic(CallInst &I,
     if (const VectorType *PTy = dyn_cast<VectorType>(I.getType())) {
       EVT VT = TLI.getValueType(PTy);
       Result = DAG.getNode(ISD::BIT_CONVERT, getCurDebugLoc(), VT, Result);
-      DAG.AssignOrdering(Result.getNode(), SDNodeOrder);
     }
 
     setValue(&I, Result);
@@ -3070,12 +2890,7 @@ GetSignificand(SelectionDAG &DAG, SDValue Op, DebugLoc dl, unsigned Order) {
                            DAG.getConstant(0x007fffff, MVT::i32));
   SDValue t2 = DAG.getNode(ISD::OR, dl, MVT::i32, t1,
                            DAG.getConstant(0x3f800000, MVT::i32));
-  SDValue Res = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::f32, t2);
-
-  DAG.AssignOrdering(t1.getNode(), Order);
-  DAG.AssignOrdering(t2.getNode(), Order);
-  DAG.AssignOrdering(Res.getNode(), Order);
-  return Res;
+  return DAG.getNode(ISD::BIT_CONVERT, dl, MVT::f32, t2);
 }
 
 /// GetExponent - Get the exponent:
@@ -3092,13 +2907,7 @@ GetExponent(SelectionDAG &DAG, SDValue Op, const TargetLowering &TLI,
                            DAG.getConstant(23, TLI.getPointerTy()));
   SDValue t2 = DAG.getNode(ISD::SUB, dl, MVT::i32, t1,
                            DAG.getConstant(127, MVT::i32));
-  SDValue Res = DAG.getNode(ISD::SINT_TO_FP, dl, MVT::f32, t2);
-
-  DAG.AssignOrdering(t0.getNode(), Order);
-  DAG.AssignOrdering(t1.getNode(), Order);
-  DAG.AssignOrdering(t2.getNode(), Order);
-  DAG.AssignOrdering(Res.getNode(), Order);
-  return Res;
+  return DAG.getNode(ISD::SINT_TO_FP, dl, MVT::f32, t2);
 }
 
 /// getF32Constant - Get 32-bit floating point constant.
@@ -3122,7 +2931,6 @@ SelectionDAGBuilder::implVisitBinaryAtomic(CallInst& I, ISD::NodeType Op) {
                   I.getOperand(1));
   setValue(&I, L);
   DAG.setRoot(L.getValue(1));
-  DAG.AssignOrdering(L.getNode(), SDNodeOrder);
   return 0;
 }
 
@@ -3133,10 +2941,7 @@ SelectionDAGBuilder::implVisitAluOverflow(CallInst &I, ISD::NodeType Op) {
   SDValue Op2 = getValue(I.getOperand(2));
 
   SDVTList VTs = DAG.getVTList(Op1.getValueType(), MVT::i1);
-  SDValue Result = DAG.getNode(Op, getCurDebugLoc(), VTs, Op1, Op2);
-
-  setValue(&I, Result);
-  DAG.AssignOrdering(Result.getNode(), SDNodeOrder);
+  setValue(&I, DAG.getNode(Op, getCurDebugLoc(), VTs, Op1, Op2));
   return 0;
 }
 
@@ -3164,15 +2969,9 @@ SelectionDAGBuilder::visitExp(CallInst &I) {
     SDValue t1 = DAG.getNode(ISD::SINT_TO_FP, dl, MVT::f32, IntegerPartOfX);
     SDValue X = DAG.getNode(ISD::FSUB, dl, MVT::f32, t0, t1);
 
-    DAG.AssignOrdering(t0.getNode(), SDNodeOrder);
-    DAG.AssignOrdering(IntegerPartOfX.getNode(), SDNodeOrder);
-    DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
-    DAG.AssignOrdering(X.getNode(), SDNodeOrder);
-
     //   IntegerPartOfX <<= 23;
     IntegerPartOfX = DAG.getNode(ISD::SHL, dl, MVT::i32, IntegerPartOfX,
                                  DAG.getConstant(23, TLI.getPointerTy()));
-    DAG.AssignOrdering(IntegerPartOfX.getNode(), SDNodeOrder);
 
     if (LimitFloatPrecision <= 6) {
       // For floating-point precision of 6:
@@ -3196,14 +2995,6 @@ SelectionDAGBuilder::visitExp(CallInst &I) {
                                TwoToFracPartOfX, IntegerPartOfX);
 
       result = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::f32, t6);
-
-      DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(TwoToFracPartOfX.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
     } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
       // For floating-point precision of 12:
       //
@@ -3230,16 +3021,6 @@ SelectionDAGBuilder::visitExp(CallInst &I) {
                                TwoToFracPartOfX, IntegerPartOfX);
 
       result = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::f32, t8);
-
-      DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t7.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t8.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(TwoToFracPartOfX.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
     } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
       // For floating-point precision of 18:
       //
@@ -3279,29 +3060,12 @@ SelectionDAGBuilder::visitExp(CallInst &I) {
                                 TwoToFracPartOfX, IntegerPartOfX);
 
       result = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::f32, t14);
-
-      DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t7.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t8.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t9.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t10.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t11.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t12.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t13.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t14.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(TwoToFracPartOfX.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
     }
   } else {
     // No special expansion.
     result = DAG.getNode(ISD::FEXP, dl,
                          getValue(I.getOperand(1)).getValueType(),
                          getValue(I.getOperand(1)));
-    DAG.AssignOrdering(result.getNode(), SDNodeOrder);
   }
 
   setValue(&I, result);
@@ -3319,15 +3083,11 @@ SelectionDAGBuilder::visitLog(CallInst &I) {
     SDValue Op = getValue(I.getOperand(1));
     SDValue Op1 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, Op);
 
-    DAG.AssignOrdering(Op1.getNode(), SDNodeOrder);
-
     // Scale the exponent by log(2) [0.69314718f].
     SDValue Exp = GetExponent(DAG, Op1, TLI, dl, SDNodeOrder);
     SDValue LogOfExponent = DAG.getNode(ISD::FMUL, dl, MVT::f32, Exp,
                                         getF32Constant(DAG, 0x3f317218));
 
-    DAG.AssignOrdering(LogOfExponent.getNode(), SDNodeOrder);
-
     // Get the significand and build it into a floating-point number with
     // exponent of 1.
     SDValue X = GetSignificand(DAG, Op1, dl, SDNodeOrder);
@@ -3350,12 +3110,6 @@ SelectionDAGBuilder::visitLog(CallInst &I) {
 
       result = DAG.getNode(ISD::FADD, dl,
                            MVT::f32, LogOfExponent, LogOfMantissa);
-
-      DAG.AssignOrdering(t0.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(LogOfMantissa.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
     } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
       // For floating-point precision of 12:
       //
@@ -3382,16 +3136,6 @@ SelectionDAGBuilder::visitLog(CallInst &I) {
 
       result = DAG.getNode(ISD::FADD, dl,
                            MVT::f32, LogOfExponent, LogOfMantissa);
-
-      DAG.AssignOrdering(t0.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(LogOfMantissa.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
     } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
       // For floating-point precision of 18:
       //
@@ -3426,27 +3170,12 @@ SelectionDAGBuilder::visitLog(CallInst &I) {
 
       result = DAG.getNode(ISD::FADD, dl,
                            MVT::f32, LogOfExponent, LogOfMantissa);
-
-      DAG.AssignOrdering(t0.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t7.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t8.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t9.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t10.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(LogOfMantissa.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
     }
   } else {
     // No special expansion.
     result = DAG.getNode(ISD::FLOG, dl,
                          getValue(I.getOperand(1)).getValueType(),
                          getValue(I.getOperand(1)));
-    DAG.AssignOrdering(result.getNode(), SDNodeOrder);
   }
 
   setValue(&I, result);
@@ -3464,13 +3193,9 @@ SelectionDAGBuilder::visitLog2(CallInst &I) {
     SDValue Op = getValue(I.getOperand(1));
     SDValue Op1 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, Op);
 
-    DAG.AssignOrdering(Op1.getNode(), SDNodeOrder);
-
     // Get the exponent.
     SDValue LogOfExponent = GetExponent(DAG, Op1, TLI, dl, SDNodeOrder);
 
-    DAG.AssignOrdering(LogOfExponent.getNode(), SDNodeOrder);
-
     // Get the significand and build it into a floating-point number with
     // exponent of 1.
     SDValue X = GetSignificand(DAG, Op1, dl, SDNodeOrder);
@@ -3493,12 +3218,6 @@ SelectionDAGBuilder::visitLog2(CallInst &I) {
 
       result = DAG.getNode(ISD::FADD, dl,
                            MVT::f32, LogOfExponent, Log2ofMantissa);
-
-      DAG.AssignOrdering(t0.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(Log2ofMantissa.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
     } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
       // For floating-point precision of 12:
       //
@@ -3525,16 +3244,6 @@ SelectionDAGBuilder::visitLog2(CallInst &I) {
 
       result = DAG.getNode(ISD::FADD, dl,
                            MVT::f32, LogOfExponent, Log2ofMantissa);
-
-      DAG.AssignOrdering(t0.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(Log2ofMantissa.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
     } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
       // For floating-point precision of 18:
       //
@@ -3570,27 +3279,12 @@ SelectionDAGBuilder::visitLog2(CallInst &I) {
 
       result = DAG.getNode(ISD::FADD, dl,
                            MVT::f32, LogOfExponent, Log2ofMantissa);
-
-      DAG.AssignOrdering(t0.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t7.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t8.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t9.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t10.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(Log2ofMantissa.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
     }
   } else {
     // No special expansion.
     result = DAG.getNode(ISD::FLOG2, dl,
                          getValue(I.getOperand(1)).getValueType(),
                          getValue(I.getOperand(1)));
-    DAG.AssignOrdering(result.getNode(), SDNodeOrder);
   }
 
   setValue(&I, result);
@@ -3608,15 +3302,11 @@ SelectionDAGBuilder::visitLog10(CallInst &I) {
     SDValue Op = getValue(I.getOperand(1));
     SDValue Op1 = DAG.getNode(ISD::BIT_CONVERT, dl, MVT::i32, Op);
 
-    DAG.AssignOrdering(Op1.getNode(), SDNodeOrder);
-
     // Scale the exponent by log10(2) [0.30102999f].
     SDValue Exp = GetExponent(DAG, Op1, TLI, dl, SDNodeOrder);
     SDValue LogOfExponent = DAG.getNode(ISD::FMUL, dl, MVT::f32, Exp,
                                         getF32Constant(DAG, 0x3e9a209a));
 
-    DAG.AssignOrdering(LogOfExponent.getNode(), SDNodeOrder);
-
     // Get the significand and build it into a floating-point number with
     // exponent of 1.
     SDValue X = GetSignificand(DAG, Op1, dl, SDNodeOrder);
@@ -3639,12 +3329,6 @@ SelectionDAGBuilder::visitLog10(CallInst &I) {
 
       result = DAG.getNode(ISD::FADD, dl,
                            MVT::f32, LogOfExponent, Log10ofMantissa);
-
-      DAG.AssignOrdering(t0.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(Log10ofMantissa.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
     } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
       // For floating-point precision of 12:
       //
@@ -3667,14 +3351,6 @@ SelectionDAGBuilder::visitLog10(CallInst &I) {
 
       result = DAG.getNode(ISD::FADD, dl,
                            MVT::f32, LogOfExponent, Log10ofMantissa);
-
-      DAG.AssignOrdering(t0.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(Log10ofMantissa.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
     } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
       // For floating-point precision of 18:
       //
@@ -3705,25 +3381,12 @@ SelectionDAGBuilder::visitLog10(CallInst &I) {
 
       result = DAG.getNode(ISD::FADD, dl,
                            MVT::f32, LogOfExponent, Log10ofMantissa);
-
-      DAG.AssignOrdering(t0.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t7.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t8.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(Log10ofMantissa.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
     }
   } else {
     // No special expansion.
     result = DAG.getNode(ISD::FLOG10, dl,
                          getValue(I.getOperand(1)).getValueType(),
                          getValue(I.getOperand(1)));
-    DAG.AssignOrdering(result.getNode(), SDNodeOrder);
   }
 
   setValue(&I, result);
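For context, the expansion whose ordering calls are being trimmed here implements limited-precision log10 by splitting the IEEE-754 float into its exponent and mantissa fields. A minimal sketch of the idea, not the actual lowering: the name `fastLog10` is hypothetical, and `std::log10` stands in for the minimax polynomial whose degree the real code picks from `LimitFloatPrecision`:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <cstring>

// Hypothetical sketch of the visitLog10 strategy:
// log10(x) = exponent * log10(2) + log10(mantissa), mantissa in [1, 2).
float fastLog10(float X) {
  uint32_t Bits;
  std::memcpy(&Bits, &X, sizeof(Bits));          // bit-cast f32 -> i32
  // The exponent field occupies bits 23..30, biased by 127.
  int Exponent = int((Bits >> 23) & 0xFF) - 127;
  float LogOfExponent = Exponent * 0.30102999f;  // log10(2)
  // Force the exponent field to 127 (unbiased 0) to isolate the mantissa.
  uint32_t MantissaBits = (Bits & 0x007FFFFFu) | (127u << 23);
  float Mantissa;
  std::memcpy(&Mantissa, &MantissaBits, sizeof(Mantissa));
  // The real code evaluates a precision-dependent polynomial here;
  // the library call is only a stand-in.
  return LogOfExponent + std::log10(Mantissa);
}
```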
@@ -3742,8 +3405,6 @@ SelectionDAGBuilder::visitExp2(CallInst &I) {
 
     SDValue IntegerPartOfX = DAG.getNode(ISD::FP_TO_SINT, dl, MVT::i32, Op);
 
-    DAG.AssignOrdering(IntegerPartOfX.getNode(), SDNodeOrder);
-
     //   FractionalPartOfX = x - (float)IntegerPartOfX;
     SDValue t1 = DAG.getNode(ISD::SINT_TO_FP, dl, MVT::f32, IntegerPartOfX);
     SDValue X = DAG.getNode(ISD::FSUB, dl, MVT::f32, Op, t1);
@@ -3752,10 +3413,6 @@ SelectionDAGBuilder::visitExp2(CallInst &I) {
     IntegerPartOfX = DAG.getNode(ISD::SHL, dl, MVT::i32, IntegerPartOfX,
                                  DAG.getConstant(23, TLI.getPointerTy()));
 
-    DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
-    DAG.AssignOrdering(X.getNode(), SDNodeOrder);
-    DAG.AssignOrdering(IntegerPartOfX.getNode(), SDNodeOrder);
-
     if (LimitFloatPrecision <= 6) {
       // For floating-point precision of 6:
       //
@@ -3777,14 +3434,6 @@ SelectionDAGBuilder::visitExp2(CallInst &I) {
 
       result = DAG.getNode(ISD::BIT_CONVERT, dl,
                            MVT::f32, TwoToFractionalPartOfX);
-
-      DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(TwoToFractionalPartOfX.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
     } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
       // For floating-point precision of 12:
       //
@@ -3810,16 +3459,6 @@ SelectionDAGBuilder::visitExp2(CallInst &I) {
 
       result = DAG.getNode(ISD::BIT_CONVERT, dl,
                            MVT::f32, TwoToFractionalPartOfX);
-
-      DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t7.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t8.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(TwoToFractionalPartOfX.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
     } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
       // For floating-point precision of 18:
       //
@@ -3856,29 +3495,12 @@ SelectionDAGBuilder::visitExp2(CallInst &I) {
 
       result = DAG.getNode(ISD::BIT_CONVERT, dl,
                            MVT::f32, TwoToFractionalPartOfX);
-
-      DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t7.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t8.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t9.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t10.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t11.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t12.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t13.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t14.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(TwoToFractionalPartOfX.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
     }
   } else {
     // No special expansion.
     result = DAG.getNode(ISD::FEXP2, dl,
                          getValue(I.getOperand(1)).getValueType(),
                          getValue(I.getOperand(1)));
-    DAG.AssignOrdering(result.getNode(), SDNodeOrder);
   }
 
   setValue(&I, result);
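The exp2 expansion above relies on the matching bit-level trick: 2^x = 2^int(x) * 2^frac(x), and multiplying by 2^int(x) is an integer add into the exponent field, which is what the `IntegerPartOfX <<= 23` in the diff sets up. A hedged sketch (`fastExp2` is a hypothetical name; `std::exp2` on the fractional part stands in for the emitted polynomial):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <cstring>

// Hypothetical sketch of the visitExp2 strategy.
float fastExp2(float X) {
  // IntegerPartOfX / FractionalPartOfX, as in the lowering above.
  int IntegerPartOfX = int(std::floor(X));
  float FractionalPartOfX = X - float(IntegerPartOfX);
  // The real code approximates 2^frac with a LimitFloatPrecision-dependent
  // polynomial; the library call is only a stand-in.
  float TwoToFrac = std::exp2(FractionalPartOfX);
  // "IntegerPartOfX <<= 23" targets the exponent field, so an integer add
  // onto the float's bit pattern multiplies the value by 2^IntegerPartOfX.
  uint32_t Bits;
  std::memcpy(&Bits, &TwoToFrac, sizeof(Bits));
  Bits += uint32_t(IntegerPartOfX) << 23;
  float Result;
  std::memcpy(&Result, &Bits, sizeof(Result));
  return Result;
}
```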
@@ -3920,17 +3542,10 @@ SelectionDAGBuilder::visitPow(CallInst &I) {
     SDValue t1 = DAG.getNode(ISD::SINT_TO_FP, dl, MVT::f32, IntegerPartOfX);
     SDValue X = DAG.getNode(ISD::FSUB, dl, MVT::f32, t0, t1);
 
-    DAG.AssignOrdering(t0.getNode(), SDNodeOrder);
-    DAG.AssignOrdering(t1.getNode(), SDNodeOrder);
-    DAG.AssignOrdering(IntegerPartOfX.getNode(), SDNodeOrder);
-    DAG.AssignOrdering(X.getNode(), SDNodeOrder);
-
     //   IntegerPartOfX <<= 23;
     IntegerPartOfX = DAG.getNode(ISD::SHL, dl, MVT::i32, IntegerPartOfX,
                                  DAG.getConstant(23, TLI.getPointerTy()));
 
-    DAG.AssignOrdering(IntegerPartOfX.getNode(), SDNodeOrder);
-
     if (LimitFloatPrecision <= 6) {
       // For floating-point precision of 6:
       //
@@ -3952,14 +3567,6 @@ SelectionDAGBuilder::visitPow(CallInst &I) {
 
       result = DAG.getNode(ISD::BIT_CONVERT, dl,
                            MVT::f32, TwoToFractionalPartOfX);
-
-      DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(TwoToFractionalPartOfX.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
     } else if (LimitFloatPrecision > 6 && LimitFloatPrecision <= 12) {
       // For floating-point precision of 12:
       //
@@ -3985,16 +3592,6 @@ SelectionDAGBuilder::visitPow(CallInst &I) {
 
       result = DAG.getNode(ISD::BIT_CONVERT, dl,
                            MVT::f32, TwoToFractionalPartOfX);
-
-      DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t7.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t8.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(TwoToFractionalPartOfX.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
     } else { // LimitFloatPrecision > 12 && LimitFloatPrecision <= 18
       // For floating-point precision of 18:
       //
@@ -4031,22 +3628,6 @@ SelectionDAGBuilder::visitPow(CallInst &I) {
 
       result = DAG.getNode(ISD::BIT_CONVERT, dl,
                            MVT::f32, TwoToFractionalPartOfX);
-
-      DAG.AssignOrdering(t2.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t3.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t4.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t5.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t6.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t7.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t8.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t9.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t10.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t11.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t12.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t13.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(t14.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(TwoToFractionalPartOfX.getNode(), SDNodeOrder);
-      DAG.AssignOrdering(result.getNode(), SDNodeOrder);
     }
   } else {
     // No special expansion.
@@ -4054,7 +3635,6 @@ SelectionDAGBuilder::visitPow(CallInst &I) {
                          getValue(I.getOperand(1)).getValueType(),
                          getValue(I.getOperand(1)),
                          getValue(I.getOperand(2)));
-    DAG.AssignOrdering(result.getNode(), SDNodeOrder);
   }
 
   setValue(&I, result);
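visitPow's special expansion follows the same playbook: for a positive base, pow(x, y) = 2^(y * log2(x)), after which the integer/fractional split from visitExp2 applies to the product `t0`. A conceptual sketch only (hypothetical name `fastPow`; the library calls stand in for the node sequence the lowering emits):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch: pow via exp2/log2, valid for X > 0.
float fastPow(float X, float Y) {
  // t0 = y * log2(x); the lowering then splits t0 into IntegerPartOfX
  // and a fractional part exactly as in visitExp2 above.
  float T0 = Y * std::log2(X);
  return std::exp2(T0);
}
```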
@@ -4131,16 +3711,12 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
   case Intrinsic::vaend:    visitVAEnd(I); return 0;
   case Intrinsic::vacopy:   visitVACopy(I); return 0;
   case Intrinsic::returnaddress:
-    Res = DAG.getNode(ISD::RETURNADDR, dl, TLI.getPointerTy(),
-                      getValue(I.getOperand(1)));
-    setValue(&I, Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    setValue(&I, DAG.getNode(ISD::RETURNADDR, dl, TLI.getPointerTy(),
+                             getValue(I.getOperand(1))));
     return 0;
   case Intrinsic::frameaddress:
-    Res = DAG.getNode(ISD::FRAMEADDR, dl, TLI.getPointerTy(),
-                      getValue(I.getOperand(1)));
-    setValue(&I, Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    setValue(&I, DAG.getNode(ISD::FRAMEADDR, dl, TLI.getPointerTy(),
+                             getValue(I.getOperand(1))));
     return 0;
   case Intrinsic::setjmp:
     return "_setjmp"+!TLI.usesUnderscoreSetJmp();
@@ -4151,10 +3727,8 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
     SDValue Op2 = getValue(I.getOperand(2));
     SDValue Op3 = getValue(I.getOperand(3));
     unsigned Align = cast<ConstantInt>(I.getOperand(4))->getZExtValue();
-    Res = DAG.getMemcpy(getRoot(), dl, Op1, Op2, Op3, Align, false,
-                        I.getOperand(1), 0, I.getOperand(2), 0);
-    DAG.setRoot(Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    DAG.setRoot(DAG.getMemcpy(getRoot(), dl, Op1, Op2, Op3, Align, false,
+                              I.getOperand(1), 0, I.getOperand(2), 0));
     return 0;
   }
   case Intrinsic::memset: {
@@ -4162,10 +3736,8 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
     SDValue Op2 = getValue(I.getOperand(2));
     SDValue Op3 = getValue(I.getOperand(3));
     unsigned Align = cast<ConstantInt>(I.getOperand(4))->getZExtValue();
-    Res = DAG.getMemset(getRoot(), dl, Op1, Op2, Op3, Align,
-                        I.getOperand(1), 0);
-    DAG.setRoot(Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    DAG.setRoot(DAG.getMemset(getRoot(), dl, Op1, Op2, Op3, Align,
+                              I.getOperand(1), 0));
     return 0;
   }
   case Intrinsic::memmove: {
@@ -4181,20 +3753,18 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
       Size = C->getZExtValue();
     if (AA->alias(I.getOperand(1), Size, I.getOperand(2), Size) ==
         AliasAnalysis::NoAlias) {
-      Res = DAG.getMemcpy(getRoot(), dl, Op1, Op2, Op3, Align, false,
-                          I.getOperand(1), 0, I.getOperand(2), 0);
-      DAG.setRoot(Res);
-      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+      DAG.setRoot(DAG.getMemcpy(getRoot(), dl, Op1, Op2, Op3, Align, false,
+                                I.getOperand(1), 0, I.getOperand(2), 0));
       return 0;
     }
 
-    Res = DAG.getMemmove(getRoot(), dl, Op1, Op2, Op3, Align,
-                         I.getOperand(1), 0, I.getOperand(2), 0);
-    DAG.setRoot(Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    DAG.setRoot(DAG.getMemmove(getRoot(), dl, Op1, Op2, Op3, Align,
+                               I.getOperand(1), 0, I.getOperand(2), 0));
     return 0;
   }
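The memmove case above keeps its alias-analysis shortcut: when AA proves the source and destination cannot overlap, the cheaper memcpy lowering is used. A self-contained illustration of the same decision, with a size-based range check standing in for the `AA->alias(...)` query (names here are illustrative, not LLVM API):

```cpp
#include <cassert>
#include <cstring>

// Stand-in for the alias query: with byte sizes known, two buffers
// "no-alias" when their ranges do not overlap.
bool noAlias(const char *Dst, const char *Src, size_t Size) {
  return Dst + Size <= Src || Src + Size <= Dst;
}

// Mirror of the lowering decision: memmove degrades to memcpy when the
// operands provably don't overlap, and stays a memmove otherwise.
void lowerMemmove(char *Dst, const char *Src, size_t Size) {
  if (noAlias(Dst, Src, Size))
    std::memcpy(Dst, Src, Size);   // safe: no overlap
  else
    std::memmove(Dst, Src, Size);  // overlapping copy
}
```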
   case Intrinsic::dbg_declare: {
+    // FIXME: currently, we get here only if OptLevel != CodeGenOpt::None.
+    // The real handling of this intrinsic is in FastISel.
     if (OptLevel != CodeGenOpt::None)
       // FIXME: Variable debug info is not supported here.
       return 0;
@@ -4207,6 +3777,8 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
 
     MDNode *Variable = DI.getVariable();
     Value *Address = DI.getAddress();
+    if (!Address)
+      return 0;
     if (BitCastInst *BCI = dyn_cast<BitCastInst>(Address))
       Address = BCI->getOperand(0);
     AllocaInst *AI = dyn_cast<AllocaInst>(Address);
@@ -4224,6 +3796,39 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
         MMI->setVariableDbgInfo(Variable, FI, Dbg);
     return 0;
   }
+  case Intrinsic::dbg_value: {
+    // FIXME: currently, we get here only if OptLevel != CodeGenOpt::None.
+    // The real handling of this intrinsic is in FastISel.
+    if (OptLevel != CodeGenOpt::None)
+      // FIXME: Variable debug info is not supported here.
+      return 0;
+    DwarfWriter *DW = DAG.getDwarfWriter();
+    if (!DW)
+      return 0;
+    DbgValueInst &DI = cast<DbgValueInst>(I);
+    if (!DIDescriptor::ValidDebugInfo(DI.getVariable(), CodeGenOpt::None))
+      return 0;
+
+    MDNode *Variable = DI.getVariable();
+    Value *V = DI.getValue();
+    if (!V)
+      return 0;
+    if (BitCastInst *BCI = dyn_cast<BitCastInst>(V))
+      V = BCI->getOperand(0);
+    AllocaInst *AI = dyn_cast<AllocaInst>(V);
+    // Don't handle byval struct arguments or VLAs, for example.
+    if (!AI)
+      return 0;
+    DenseMap<const AllocaInst*, int>::iterator SI =
+      FuncInfo.StaticAllocaMap.find(AI);
+    if (SI == FuncInfo.StaticAllocaMap.end())
+      return 0; // VLAs.
+    int FI = SI->second;
+    if (MachineModuleInfo *MMI = DAG.getMachineModuleInfo())
+      if (MDNode *Dbg = DI.getMetadata("dbg"))
+        MMI->setVariableDbgInfo(Variable, FI, Dbg);
+    return 0;
+  }
   case Intrinsic::eh_exception: {
     // Insert the EXCEPTIONADDR instruction.
     assert(CurMBB->isLandingPad() &&"Call to eh.exception not in landing pad!");
@@ -4233,7 +3838,6 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
     SDValue Op = DAG.getNode(ISD::EXCEPTIONADDR, dl, VTs, Ops, 1);
     setValue(&I, Op);
     DAG.setRoot(Op.getValue(1));
-    DAG.AssignOrdering(Op.getNode(), SDNodeOrder);
     return 0;
   }
 
@@ -4257,13 +3861,8 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
     Ops[0] = getValue(I.getOperand(1));
     Ops[1] = getRoot();
     SDValue Op = DAG.getNode(ISD::EHSELECTION, dl, VTs, Ops, 2);
-
     DAG.setRoot(Op.getValue(1));
-
-    Res = DAG.getSExtOrTrunc(Op, dl, MVT::i32);
-    setValue(&I, Res);
-    DAG.AssignOrdering(Op.getNode(), SDNodeOrder);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    setValue(&I, DAG.getSExtOrTrunc(Op, dl, MVT::i32));
     return 0;
   }
 
@@ -4281,7 +3880,6 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
     }
 
     setValue(&I, Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
 
@@ -4289,13 +3887,11 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
   case Intrinsic::eh_return_i64:
     if (MachineModuleInfo *MMI = DAG.getMachineModuleInfo()) {
       MMI->setCallsEHReturn(true);
-      Res = DAG.getNode(ISD::EH_RETURN, dl,
-                        MVT::Other,
-                        getControlRoot(),
-                        getValue(I.getOperand(1)),
-                        getValue(I.getOperand(2)));
-      DAG.setRoot(Res);
-      DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+      DAG.setRoot(DAG.getNode(ISD::EH_RETURN, dl,
+                              MVT::Other,
+                              getControlRoot(),
+                              getValue(I.getOperand(1)),
+                              getValue(I.getOperand(2))));
     } else {
       setValue(&I, DAG.getConstant(0, TLI.getPointerTy()));
     }
@@ -4318,15 +3914,20 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
     SDValue FA = DAG.getNode(ISD::FRAMEADDR, dl,
                              TLI.getPointerTy(),
                              DAG.getConstant(0, TLI.getPointerTy()));
-    Res = DAG.getNode(ISD::ADD, dl, TLI.getPointerTy(),
-                      FA, Offset);
-    setValue(&I, Res);
-    DAG.AssignOrdering(CfaArg.getNode(), SDNodeOrder);
-    DAG.AssignOrdering(Offset.getNode(), SDNodeOrder);
-    DAG.AssignOrdering(FA.getNode(), SDNodeOrder);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    setValue(&I, DAG.getNode(ISD::ADD, dl, TLI.getPointerTy(),
+                             FA, Offset));
+    return 0;
+  }
+  case Intrinsic::eh_sjlj_callsite: {
+    MachineModuleInfo *MMI = DAG.getMachineModuleInfo();
+    ConstantInt *CI = dyn_cast<ConstantInt>(I.getOperand(1));
+    assert(CI && "Non-constant call site value in eh.sjlj.callsite!");
+    assert(MMI->getCurrentCallSite() == 0 && "Overlapping call sites!");
+
+    MMI->setCurrentCallSite(CI->getZExtValue());
     return 0;
   }
+
   case Intrinsic::convertff:
   case Intrinsic::convertfsi:
   case Intrinsic::convertfui:
@@ -4357,35 +3958,26 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
                                getValue(I.getOperand(3)),
                                Code);
     setValue(&I, Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
   case Intrinsic::sqrt:
-    Res = DAG.getNode(ISD::FSQRT, dl,
-                      getValue(I.getOperand(1)).getValueType(),
-                      getValue(I.getOperand(1)));
-    setValue(&I, Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    setValue(&I, DAG.getNode(ISD::FSQRT, dl,
+                             getValue(I.getOperand(1)).getValueType(),
+                             getValue(I.getOperand(1))));
     return 0;
   case Intrinsic::powi:
-    Res = ExpandPowI(dl, getValue(I.getOperand(1)), getValue(I.getOperand(2)),
-                     DAG);
-    setValue(&I, Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    setValue(&I, ExpandPowI(dl, getValue(I.getOperand(1)),
+                            getValue(I.getOperand(2)), DAG));
     return 0;
   case Intrinsic::sin:
-    Res = DAG.getNode(ISD::FSIN, dl,
-                      getValue(I.getOperand(1)).getValueType(),
-                      getValue(I.getOperand(1)));
-    setValue(&I, Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    setValue(&I, DAG.getNode(ISD::FSIN, dl,
+                             getValue(I.getOperand(1)).getValueType(),
+                             getValue(I.getOperand(1))));
     return 0;
   case Intrinsic::cos:
-    Res = DAG.getNode(ISD::FCOS, dl,
-                      getValue(I.getOperand(1)).getValueType(),
-                      getValue(I.getOperand(1)));
-    setValue(&I, Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    setValue(&I, DAG.getNode(ISD::FCOS, dl,
+                             getValue(I.getOperand(1)).getValueType(),
+                             getValue(I.getOperand(1))));
     return 0;
   case Intrinsic::log:
     visitLog(I);
@@ -4407,9 +3999,7 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
     return 0;
   case Intrinsic::pcmarker: {
     SDValue Tmp = getValue(I.getOperand(1));
-    Res = DAG.getNode(ISD::PCMARKER, dl, MVT::Other, getRoot(), Tmp);
-    DAG.setRoot(Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    DAG.setRoot(DAG.getNode(ISD::PCMARKER, dl, MVT::Other, getRoot(), Tmp));
     return 0;
   }
   case Intrinsic::readcyclecounter: {
@@ -4419,38 +4009,29 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
                       &Op, 1);
     setValue(&I, Res);
     DAG.setRoot(Res.getValue(1));
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
   case Intrinsic::bswap:
-    Res = DAG.getNode(ISD::BSWAP, dl,
-                      getValue(I.getOperand(1)).getValueType(),
-                      getValue(I.getOperand(1)));
-    setValue(&I, Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    setValue(&I, DAG.getNode(ISD::BSWAP, dl,
+                             getValue(I.getOperand(1)).getValueType(),
+                             getValue(I.getOperand(1))));
     return 0;
   case Intrinsic::cttz: {
     SDValue Arg = getValue(I.getOperand(1));
     EVT Ty = Arg.getValueType();
-    Res = DAG.getNode(ISD::CTTZ, dl, Ty, Arg);
-    setValue(&I, Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    setValue(&I, DAG.getNode(ISD::CTTZ, dl, Ty, Arg));
     return 0;
   }
   case Intrinsic::ctlz: {
     SDValue Arg = getValue(I.getOperand(1));
     EVT Ty = Arg.getValueType();
-    Res = DAG.getNode(ISD::CTLZ, dl, Ty, Arg);
-    setValue(&I, Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    setValue(&I, DAG.getNode(ISD::CTLZ, dl, Ty, Arg));
     return 0;
   }
   case Intrinsic::ctpop: {
     SDValue Arg = getValue(I.getOperand(1));
     EVT Ty = Arg.getValueType();
-    Res = DAG.getNode(ISD::CTPOP, dl, Ty, Arg);
-    setValue(&I, Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    setValue(&I, DAG.getNode(ISD::CTPOP, dl, Ty, Arg));
     return 0;
   }
   case Intrinsic::stacksave: {
@@ -4459,14 +4040,11 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
                       DAG.getVTList(TLI.getPointerTy(), MVT::Other), &Op, 1);
     setValue(&I, Res);
     DAG.setRoot(Res.getValue(1));
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
   case Intrinsic::stackrestore: {
     Res = getValue(I.getOperand(1));
-    Res = DAG.getNode(ISD::STACKRESTORE, dl, MVT::Other, getRoot(), Res);
-    DAG.setRoot(Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    DAG.setRoot(DAG.getNode(ISD::STACKRESTORE, dl, MVT::Other, getRoot(), Res));
     return 0;
   }
   case Intrinsic::stackprotector: {
@@ -4489,7 +4067,6 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
                        0, true);
     setValue(&I, Res);
     DAG.setRoot(Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
   case Intrinsic::objectsize: {
@@ -4507,7 +4084,6 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
       Res = DAG.getConstant(0, Ty);
 
     setValue(&I, Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
   case Intrinsic::var_annotation:
@@ -4531,7 +4107,6 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
 
     setValue(&I, Res);
     DAG.setRoot(Res.getValue(1));
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
     return 0;
   }
   case Intrinsic::gcroot:
@@ -4548,14 +4123,10 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
     llvm_unreachable("GC failed to lower gcread/gcwrite intrinsics!");
     return 0;
   case Intrinsic::flt_rounds:
-    Res = DAG.getNode(ISD::FLT_ROUNDS_, dl, MVT::i32);
-    setValue(&I, Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    setValue(&I, DAG.getNode(ISD::FLT_ROUNDS_, dl, MVT::i32));
     return 0;
   case Intrinsic::trap:
-    Res = DAG.getNode(ISD::TRAP, dl,MVT::Other, getRoot());
-    DAG.setRoot(Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    DAG.setRoot(DAG.getNode(ISD::TRAP, dl,MVT::Other, getRoot()));
     return 0;
   case Intrinsic::uadd_with_overflow:
     return implVisitAluOverflow(I, ISD::UADDO);
@@ -4576,9 +4147,7 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
     Ops[1] = getValue(I.getOperand(1));
     Ops[2] = getValue(I.getOperand(2));
     Ops[3] = getValue(I.getOperand(3));
-    Res = DAG.getNode(ISD::PREFETCH, dl, MVT::Other, &Ops[0], 4);
-    DAG.setRoot(Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    DAG.setRoot(DAG.getNode(ISD::PREFETCH, dl, MVT::Other, &Ops[0], 4));
     return 0;
   }
 
@@ -4588,9 +4157,7 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
     for (int x = 1; x < 6; ++x)
       Ops[x] = getValue(I.getOperand(x));
 
-    Res = DAG.getNode(ISD::MEMBARRIER, dl, MVT::Other, &Ops[0], 6);
-    DAG.setRoot(Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    DAG.setRoot(DAG.getNode(ISD::MEMBARRIER, dl, MVT::Other, &Ops[0], 6));
     return 0;
   }
   case Intrinsic::atomic_cmp_swap: {
@@ -4605,7 +4172,6 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
                     I.getOperand(1));
     setValue(&I, L);
     DAG.setRoot(L.getValue(1));
-    DAG.AssignOrdering(L.getNode(), SDNodeOrder);
     return 0;
   }
   case Intrinsic::atomic_load_add:
@@ -4634,9 +4200,7 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
   case Intrinsic::invariant_start:
   case Intrinsic::lifetime_start:
     // Discard region information.
-    Res = DAG.getUNDEF(TLI.getPointerTy());
-    setValue(&I, Res);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
+    setValue(&I, DAG.getUNDEF(TLI.getPointerTy()));
     return 0;
   case Intrinsic::invariant_end:
   case Intrinsic::lifetime_end:
@@ -4651,19 +4215,25 @@ SelectionDAGBuilder::visitIntrinsicCall(CallInst &I, unsigned Intrinsic) {
 /// between it and the return.
 ///
 /// This function only tests target-independent requirements.
-/// For target-dependent requirements, a target should override
-/// TargetLowering::IsEligibleForTailCallOptimization.
-///
 static bool
-isInTailCallPosition(const Instruction *I, Attributes CalleeRetAttr,
+isInTailCallPosition(CallSite CS, Attributes CalleeRetAttr,
                      const TargetLowering &TLI) {
+  const Instruction *I = CS.getInstruction();
   const BasicBlock *ExitBB = I->getParent();
   const TerminatorInst *Term = ExitBB->getTerminator();
   const ReturnInst *Ret = dyn_cast<ReturnInst>(Term);
   const Function *F = ExitBB->getParent();
 
-  // The block must end in a return statement or an unreachable.
-  if (!Ret && !isa<UnreachableInst>(Term)) return false;
+  // The block must end in a return statement or unreachable.
+  //
+  // FIXME: Decline tailcall if it's not guaranteed and if the block ends in
+  // an unreachable, for now. The way tailcall optimization is currently
+  // implemented means it will add an epilogue followed by a jump. That is
+  // not profitable. Also, if the callee is a special function (e.g.
+  // longjmp on x86), it can end up causing miscompilation that has not
+  // been fully understood.
+  if (!Ret &&
+      (!GuaranteedTailCallOpt || !isa<UnreachableInst>(Term))) return false;
 
   // If I will have a chain, make sure no other instruction that will have a
   // chain interposes between I and the return.
@@ -4692,6 +4262,10 @@ isInTailCallPosition(const Instruction *I, Attributes CalleeRetAttr,
   if ((CalleeRetAttr ^ CallerRetAttr) & ~Attribute::NoAlias)
     return false;
 
+  // It's not safe to eliminate the sign / zero extension of the return value.
+  if ((CallerRetAttr & Attribute::ZExt) || (CallerRetAttr & Attribute::SExt))
+    return false;
+
   // Otherwise, make sure the unmodified return value of I is the return value.
   for (const Instruction *U = dyn_cast<Instruction>(Ret->getOperand(0)); ;
        U = dyn_cast<Instruction>(U->getOperand(0))) {
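The new return-attribute check added in this hunk can be read in isolation: a tail call is only eligible when caller and callee agree on the return-value attributes (noalias aside), and never when the caller must sign- or zero-extend the result. A small self-contained mirror of that logic; the enum values are illustrative, not LLVM's actual `Attributes` encoding:

```cpp
#include <cassert>

// Illustrative attribute bits; LLVM's real encoding differs.
enum RetAttr : unsigned { None = 0, ZExt = 1, SExt = 2, NoAlias = 4 };

bool retAttrsAllowTailCall(unsigned CalleeRetAttr, unsigned CallerRetAttr) {
  // Differing attributes (other than noalias) change the return contract.
  if ((CalleeRetAttr ^ CallerRetAttr) & ~unsigned(NoAlias))
    return false;
  // It's not safe to eliminate the sign/zero extension of the return value.
  if ((CallerRetAttr & ZExt) || (CallerRetAttr & SExt))
    return false;
  return true;
}
```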
@@ -4787,6 +4361,15 @@ void SelectionDAGBuilder::LowerCallTo(CallSite CS, SDValue Callee,
     // used to detect deletion of the invoke via the MachineModuleInfo.
     BeginLabel = MMI->NextLabelID();
 
+    // For SjLj, keep track of which landing pads go with which invokes
+    // so as to maintain the ordering of pads in the LSDA.
+    unsigned CallSiteIndex = MMI->getCurrentCallSite();
+    if (CallSiteIndex) {
+      MMI->setCallSiteBeginLabel(BeginLabel, CallSiteIndex);
+      // Now that the call site is handled, stop tracking it.
+      MMI->setCurrentCallSite(0);
+    }
+
     // Both PendingLoads and PendingExports must be flushed here;
     // this call might not return.
     (void)getRoot();
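The eh.sjlj.callsite case and this LowerCallTo hunk form a two-step handshake: the intrinsic records a call-site index in MachineModuleInfo, and the next invoke's begin label claims it (then clears it), preserving landing-pad order in the LSDA. A hypothetical stand-in for that bookkeeping, not the real MachineModuleInfo interface:

```cpp
#include <cassert>
#include <map>

// Hypothetical mirror of the MachineModuleInfo call-site bookkeeping.
struct CallSiteInfo {
  unsigned CurrentCallSite = 0;
  std::map<unsigned, unsigned> SiteForBeginLabel; // begin label -> site

  void setCurrentCallSite(unsigned Site) { CurrentCallSite = Site; }
  unsigned getCurrentCallSite() const { return CurrentCallSite; }
  void setCallSiteBeginLabel(unsigned Label, unsigned Site) {
    SiteForBeginLabel[Label] = Site;
  }
};

// As in LowerCallTo: claim the pending site for this invoke's label.
void recordInvokeBeginLabel(CallSiteInfo &MMI, unsigned BeginLabel) {
  if (unsigned Site = MMI.getCurrentCallSite()) {
    MMI.setCallSiteBeginLabel(BeginLabel, Site);
    MMI.setCurrentCallSite(0); // stop tracking once handled
  }
}
```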
@@ -4797,9 +4380,7 @@ void SelectionDAGBuilder::LowerCallTo(CallSite CS, SDValue Callee,
   // Check if target-independent constraints permit a tail call here.
   // Target-dependent constraints are checked within TLI.LowerCallTo.
   if (isTailCall &&
-      !isInTailCallPosition(CS.getInstruction(),
-                            CS.getAttributes().getRetAttributes(),
-                            TLI))
+      !isInTailCallPosition(CS, CS.getAttributes().getRetAttributes(), TLI))
     isTailCall = false;
 
   std::pair<SDValue,SDValue> Result =
@@ -4817,7 +4398,6 @@ void SelectionDAGBuilder::LowerCallTo(CallSite CS, SDValue Callee,
          "Null value expected with tail call!");
   if (Result.first.getNode()) {
     setValue(CS.getInstruction(), Result.first);
-    DAG.AssignOrdering(Result.first.getNode(), SDNodeOrder);
   } else if (!CanLowerReturn && Result.second.getNode()) {
     // The instruction result is the result of loading from the
     // hidden sret parameter.
@@ -4862,27 +4442,22 @@ void SelectionDAGBuilder::LowerCallTo(CallSite CS, SDValue Callee,
         getCopyFromParts(DAG, getCurDebugLoc(), SDNodeOrder, &Values[CurReg], NumRegs,
                          RegisterVT, VT, AssertOp);
       ReturnValues.push_back(ReturnValue);
-      DAG.AssignOrdering(ReturnValue.getNode(), SDNodeOrder);
       CurReg += NumRegs;
     }
-    SDValue Res = DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
-                              DAG.getVTList(&RetTys[0], RetTys.size()),
-                              &ReturnValues[0], ReturnValues.size());
 
-    setValue(CS.getInstruction(), Res);
+    setValue(CS.getInstruction(),
+             DAG.getNode(ISD::MERGE_VALUES, getCurDebugLoc(),
+                         DAG.getVTList(&RetTys[0], RetTys.size()),
+                         &ReturnValues[0], ReturnValues.size()));
 
-    DAG.AssignOrdering(Chain.getNode(), SDNodeOrder);
-    DAG.AssignOrdering(Res.getNode(), SDNodeOrder);
   }
 
   // As a special case, a null chain means that a tail call has been emitted and
   // the DAG root is already updated.
-  if (Result.second.getNode()) {
+  if (Result.second.getNode())
     DAG.setRoot(Result.second);
-    DAG.AssignOrdering(Result.second.getNode(), SDNodeOrder);
-  } else {
+  else
     HasTailCall = true;
-  }
 
   if (LandingPad && MMI) {
     // Insert a label at the end of the invoke call to mark the try range.  This
@@ -5122,9 +4697,7 @@ void SelectionDAGBuilder::visitCall(CallInst &I) {
 
  // Check if we can potentially perform a tail call. More detailed checking
  // will be done within LowerCallTo, after more information about the call is known.
-  bool isTailCall = PerformTailCallOpt && I.isTailCall();
-
-  LowerCallTo(&I, Callee, isTailCall);
+  LowerCallTo(&I, Callee, I.isTailCall());
 }
 
 /// getCopyFromRegs - Emit a series of CopyFromReg nodes that copies from
@@ -5154,7 +4727,6 @@ SDValue RegsForValue::getCopyFromRegs(SelectionDAG &DAG, DebugLoc dl,
       }
 
       Chain = P.getValue(1);
-      DAG.AssignOrdering(P.getNode(), Order);
 
       // If the source register was virtual and if we know something about it,
       // add an assert node.
@@ -5190,11 +4762,9 @@ SDValue RegsForValue::getCopyFromRegs(SelectionDAG &DAG, DebugLoc dl,
           else if (NumZeroBits >= RegSize-32)
             isSExt = false, FromVT = MVT::i32; // ASSERT ZEXT 32
 
-          if (FromVT != MVT::Other) {
+          if (FromVT != MVT::Other)
             P = DAG.getNode(isSExt ? ISD::AssertSext : ISD::AssertZext, dl,
                             RegisterVT, P, DAG.getValueType(FromVT));
-            DAG.AssignOrdering(P.getNode(), Order);
-          }
         }
       }
 
@@ -5203,16 +4773,13 @@ SDValue RegsForValue::getCopyFromRegs(SelectionDAG &DAG, DebugLoc dl,
 
     Values[Value] = getCopyFromParts(DAG, dl, Order, Parts.begin(),
                                      NumRegs, RegisterVT, ValueVT);
-    DAG.AssignOrdering(Values[Value].getNode(), Order);
     Part += NumRegs;
     Parts.clear();
   }
 
-  SDValue Res = DAG.getNode(ISD::MERGE_VALUES, dl,
-                            DAG.getVTList(&ValueVTs[0], ValueVTs.size()),
-                            &Values[0], ValueVTs.size());
-  DAG.AssignOrdering(Res.getNode(), Order);
-  return Res;
+  return DAG.getNode(ISD::MERGE_VALUES, dl,
+                     DAG.getVTList(&ValueVTs[0], ValueVTs.size()),
+                     &Values[0], ValueVTs.size());
 }
 
 /// getCopyToRegs - Emit a series of CopyToReg nodes that copies the
@@ -5248,7 +4815,6 @@ void RegsForValue::getCopyToRegs(SDValue Val, SelectionDAG &DAG, DebugLoc dl,
     }
 
     Chains[i] = Part.getValue(0);
-    DAG.AssignOrdering(Part.getNode(), Order);
   }
 
   if (NumRegs == 1 || Flag)
@@ -5265,8 +4831,6 @@ void RegsForValue::getCopyToRegs(SDValue Val, SelectionDAG &DAG, DebugLoc dl,
     Chain = Chains[NumRegs-1];
   else
     Chain = DAG.getNode(ISD::TokenFactor, dl, MVT::Other, &Chains[0], NumRegs);
-
-  DAG.AssignOrdering(Chain.getNode(), Order);
 }
 
 /// AddInlineAsmOperands - Add this value to the specified inlineasm node
@@ -5283,16 +4847,12 @@ void RegsForValue::AddInlineAsmOperands(unsigned Code,
   SDValue Res = DAG.getTargetConstant(Flag, MVT::i32);
   Ops.push_back(Res);
 
-  DAG.AssignOrdering(Res.getNode(), Order);
-
   for (unsigned Value = 0, Reg = 0, e = ValueVTs.size(); Value != e; ++Value) {
     unsigned NumRegs = TLI->getNumRegisters(*DAG.getContext(), ValueVTs[Value]);
     EVT RegisterVT = RegVTs[Value];
     for (unsigned i = 0; i != NumRegs; ++i) {
       assert(Reg < Regs.size() && "Mismatch in # registers expected");
-      SDValue Res = DAG.getRegister(Regs[Reg++], RegisterVT);
-      Ops.push_back(Res);
-      DAG.AssignOrdering(Res.getNode(), Order);
+      Ops.push_back(DAG.getRegister(Regs[Reg++], RegisterVT));
     }
   }
 }
@@ -5311,7 +4871,7 @@ isAllocatableRegister(unsigned Reg, MachineFunction &MF,
     EVT ThisVT = MVT::Other;
 
     const TargetRegisterClass *RC = *RCI;
-    // If none of the the value types for this register class are valid, we
+    // If none of the value types for this register class are valid, we
     // can't use it.  For example, 64-bit reg classes on 32-bit targets.
     for (TargetRegisterClass::vt_iterator I = RC->vt_begin(), E = RC->vt_end();
          I != E; ++I) {
@@ -5511,8 +5071,6 @@ GetRegistersForValue(SDISelAsmOperandInfo &OpInfo,
                                          RegVT, OpInfo.CallOperand);
         OpInfo.ConstraintVT = RegVT;
       }
-
-      DAG.AssignOrdering(OpInfo.CallOperand.getNode(), SDNodeOrder);
     }
 
     NumRegs = TLI.getNumRegisters(Context, OpInfo.ConstraintVT);
@@ -5974,7 +5532,8 @@ void SelectionDAGBuilder::visitInlineAsm(CallSite CS) {
              "Don't know how to handle indirect register inputs yet!");
 
       // Copy the input into the appropriate registers.
-      if (OpInfo.AssignedRegs.Regs.empty()) {
+      if (OpInfo.AssignedRegs.Regs.empty() ||
+          !OpInfo.AssignedRegs.areValueTypesLegal()) {
         llvm_report_error("Couldn't allocate input reg for"
                           " constraint '"+ OpInfo.ConstraintCode +"'!");
       }
@@ -6118,9 +5677,6 @@ TargetLowering::LowerCallTo(SDValue Chain, const Type *RetTy,
                             SDValue Callee,
                             ArgListTy &Args, SelectionDAG &DAG, DebugLoc dl,
                             unsigned Order) {
-  assert((!isTailCall || PerformTailCallOpt) &&
-         "isTailCall set when tail-call optimizations are disabled!");
-
   // Handle all of the outgoing arguments.
   SmallVector<ISD::OutputArg, 32> Outs;
   for (unsigned i = 0, e = Args.size(); i != e; ++i) {
@@ -6209,12 +5765,6 @@ TargetLowering::LowerCallTo(SDValue Chain, const Type *RetTy,
     }
   }
 
-  // Check if target-dependent constraints permit a tail call here.
-  // Target-independent constraints should be checked by the caller.
-  if (isTailCall &&
-      !IsEligibleForTailCallOptimization(Callee, CallConv, isVarArg, Ins, DAG))
-    isTailCall = false;
-
   SmallVector<SDValue, 4> InVals;
   Chain = LowerCall(Chain, Callee, CallConv, isVarArg, isTailCall,
                     Outs, Ins, dl, DAG, InVals);
@@ -6233,8 +5783,6 @@ TargetLowering::LowerCallTo(SDValue Chain, const Type *RetTy,
                  "LowerCall emitted a value with the wrong type!");
         });
 
-  DAG.AssignOrdering(Chain.getNode(), Order);
-
   // For a tail call, the return value is merely live-out and there aren't
   // any nodes in the DAG representing it. Return a special value to
   // indicate that a tail call has been emitted and no more Instructions
@@ -6258,11 +5806,9 @@ TargetLowering::LowerCallTo(SDValue Chain, const Type *RetTy,
     EVT RegisterVT = getRegisterType(RetTy->getContext(), VT);
     unsigned NumRegs = getNumRegisters(RetTy->getContext(), VT);
 
-    SDValue ReturnValue =
-      getCopyFromParts(DAG, dl, Order, &InVals[CurReg], NumRegs,
-                       RegisterVT, VT, AssertOp);
-    ReturnValues.push_back(ReturnValue);
-    DAG.AssignOrdering(ReturnValue.getNode(), Order);
+    ReturnValues.push_back(getCopyFromParts(DAG, dl, Order, &InVals[CurReg],
+                                            NumRegs, RegisterVT, VT,
+                                            AssertOp));
     CurReg += NumRegs;
   }
 
@@ -6275,7 +5821,6 @@ TargetLowering::LowerCallTo(SDValue Chain, const Type *RetTy,
   SDValue Res = DAG.getNode(ISD::MERGE_VALUES, dl,
                             DAG.getVTList(&RetTys[0], RetTys.size()),
                             &ReturnValues[0], ReturnValues.size());
-  DAG.AssignOrdering(Res.getNode(), Order);
   return std::make_pair(Res, Chain);
 }
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.h b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.h
index db656e3..bc4b33d 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.h
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.h
@@ -342,6 +342,11 @@ public:
 
   void CopyValueToVirtualRegister(Value *V, unsigned Reg);
 
+  /// AssignOrderingToNode - Assign an ordering to the node. The order is gotten
+  /// from how the code appeared in the source. The ordering is used by the
+  /// scheduler to effectively turn off scheduling.
+  void AssignOrderingToNode(const SDNode *Node);
+
   void visit(Instruction &I);
 
   void visit(unsigned Opcode, User &I);
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
index 2bec964..da2e6e4 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
@@ -457,6 +457,21 @@ public:
 };
 }
 
+/// TrivialTruncElim - Eliminate some trivial nops that can result from
+/// ShrinkDemandedOps: (trunc (ext n)) -> n.
+static bool TrivialTruncElim(SDValue Op,
+                             TargetLowering::TargetLoweringOpt &TLO) {
+  SDValue N0 = Op.getOperand(0);
+  EVT VT = Op.getValueType();
+  if ((N0.getOpcode() == ISD::ZERO_EXTEND ||
+       N0.getOpcode() == ISD::SIGN_EXTEND ||
+       N0.getOpcode() == ISD::ANY_EXTEND) &&
+      N0.getOperand(0).getValueType() == VT) {
+    return TLO.CombineTo(Op, N0.getOperand(0));
+  }
+  return false;
+}
+
 /// ShrinkDemandedOps - A late transformation pass that shrink expressions
 /// using TargetLowering::TargetLoweringOpt::ShrinkDemandedOp. It converts
 /// x+y to (VT)((SmallVT)x+(SmallVT)y) if the casts are free.
@@ -489,7 +504,9 @@ void SelectionDAGISel::ShrinkDemandedOps() {
       APInt Demanded = APInt::getAllOnesValue(BitWidth);
       APInt KnownZero, KnownOne;
       if (TLI.SimplifyDemandedBits(SDValue(N, 0), Demanded,
-                                   KnownZero, KnownOne, TLO)) {
+                                   KnownZero, KnownOne, TLO) ||
+          (N->getOpcode() == ISD::TRUNCATE &&
+           TrivialTruncElim(SDValue(N, 0), TLO))) {
         // Revisit the node.
         Worklist.erase(std::remove(Worklist.begin(), Worklist.end(), N),
                        Worklist.end());
@@ -801,7 +818,7 @@ void SelectionDAGISel::SelectAllBasicBlocks(Function &Fn,
       // landing pad can thus be detected via the MachineModuleInfo.
       unsigned LabelID = MMI->addLandingPad(BB);
 
-      const TargetInstrDesc &II = TII.get(TargetInstrInfo::EH_LABEL);
+      const TargetInstrDesc &II = TII.get(TargetOpcode::EH_LABEL);
       BuildMI(BB, SDB->getCurDebugLoc(), II).addImm(LabelID);
 
       // Mark exception register as live in.
@@ -953,7 +970,7 @@ SelectionDAGISel::FinishBasicBlock() {
       SDB->BitTestCases.empty()) {
     for (unsigned i = 0, e = SDB->PHINodesToUpdate.size(); i != e; ++i) {
       MachineInstr *PHI = SDB->PHINodesToUpdate[i].first;
-      assert(PHI->getOpcode() == TargetInstrInfo::PHI &&
+      assert(PHI->isPHI() &&
              "This is not a machine PHI node that we are updating!");
       PHI->addOperand(MachineOperand::CreateReg(SDB->PHINodesToUpdate[i].second,
                                                 false));
@@ -1000,7 +1017,7 @@ SelectionDAGISel::FinishBasicBlock() {
     for (unsigned pi = 0, pe = SDB->PHINodesToUpdate.size(); pi != pe; ++pi) {
       MachineInstr *PHI = SDB->PHINodesToUpdate[pi].first;
       MachineBasicBlock *PHIBB = PHI->getParent();
-      assert(PHI->getOpcode() == TargetInstrInfo::PHI &&
+      assert(PHI->isPHI() &&
              "This is not a machine PHI node that we are updating!");
       // This is "default" BB. We have two jumps to it. From "header" BB and
       // from last "case" BB.
@@ -1056,7 +1073,7 @@ SelectionDAGISel::FinishBasicBlock() {
     for (unsigned pi = 0, pe = SDB->PHINodesToUpdate.size(); pi != pe; ++pi) {
       MachineInstr *PHI = SDB->PHINodesToUpdate[pi].first;
       MachineBasicBlock *PHIBB = PHI->getParent();
-      assert(PHI->getOpcode() == TargetInstrInfo::PHI &&
+      assert(PHI->isPHI() &&
              "This is not a machine PHI node that we are updating!");
       // "default" BB. We can go there only from header BB.
       if (PHIBB == SDB->JTCases[i].second.Default) {
@@ -1079,7 +1096,7 @@ SelectionDAGISel::FinishBasicBlock() {
   // need to update PHI nodes in that block.
   for (unsigned i = 0, e = SDB->PHINodesToUpdate.size(); i != e; ++i) {
     MachineInstr *PHI = SDB->PHINodesToUpdate[i].first;
-    assert(PHI->getOpcode() == TargetInstrInfo::PHI &&
+    assert(PHI->isPHI() &&
            "This is not a machine PHI node that we are updating!");
     if (BB->isSuccessor(PHI->getParent())) {
       PHI->addOperand(MachineOperand::CreateReg(SDB->PHINodesToUpdate[i].second,
@@ -1116,7 +1133,7 @@ SelectionDAGISel::FinishBasicBlock() {
       // BB may have been removed from the CFG if a branch was constant folded.
       if (ThisBB->isSuccessor(BB)) {
         for (MachineBasicBlock::iterator Phi = BB->begin();
-             Phi != BB->end() && Phi->getOpcode() == TargetInstrInfo::PHI;
+             Phi != BB->end() && Phi->isPHI();
              ++Phi) {
           // This value for this PHI node is recorded in PHINodesToUpdate.
           for (unsigned pn = 0; ; ++pn) {
@@ -1410,15 +1427,14 @@ SDNode *SelectionDAGISel::Select_INLINEASM(SDNode *N) {
 }
 
 SDNode *SelectionDAGISel::Select_UNDEF(SDNode *N) {
-  return CurDAG->SelectNodeTo(N, TargetInstrInfo::IMPLICIT_DEF,
-                              N->getValueType(0));
+  return CurDAG->SelectNodeTo(N, TargetOpcode::IMPLICIT_DEF,N->getValueType(0));
 }
 
 SDNode *SelectionDAGISel::Select_EH_LABEL(SDNode *N) {
   SDValue Chain = N->getOperand(0);
   unsigned C = cast<LabelSDNode>(N)->getLabelID();
   SDValue Tmp = CurDAG->getTargetConstant(C, MVT::i32);
-  return CurDAG->SelectNodeTo(N, TargetInstrInfo::EH_LABEL,
+  return CurDAG->SelectNodeTo(N, TargetOpcode::EH_LABEL,
                               MVT::Other, Tmp, Chain);
 }
 
diff --git a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
index f923927..1683d01 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
@@ -540,6 +540,24 @@ TargetLowering::~TargetLowering() {
   delete &TLOF;
 }
 
+/// canOpTrap - Returns true if the operation can trap for the value type.
+/// VT must be a legal type.
+bool TargetLowering::canOpTrap(unsigned Op, EVT VT) const {
+  assert(isTypeLegal(VT));
+  switch (Op) {
+  default:
+    return false;
+  case ISD::FDIV:
+  case ISD::FREM:
+  case ISD::SDIV:
+  case ISD::UDIV:
+  case ISD::SREM:
+  case ISD::UREM:
+    return true;
+  }
+}
+
+
 static unsigned getVectorTypeBreakdownMVT(MVT VT, MVT &IntermediateVT,
                                        unsigned &NumIntermediates,
                                        EVT &RegisterVT,
@@ -2366,7 +2384,7 @@ getRegForInlineAsmConstraint(const std::string &Constraint,
        E = RI->regclass_end(); RCI != E; ++RCI) {
     const TargetRegisterClass *RC = *RCI;
     
-    // If none of the the value types for this register class are valid, we 
+    // If none of the value types for this register class are valid, we 
     // can't use it.  For example, 64-bit reg classes on 32-bit targets.
     bool isLegal = false;
     for (TargetRegisterClass::vt_iterator I = RC->vt_begin(), E = RC->vt_end();
diff --git a/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.cpp b/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.cpp
index 27d429b..e7b0cff 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SimpleRegisterCoalescing.cpp
@@ -197,7 +197,7 @@ bool SimpleRegisterCoalescing::AdjustCopiesBackFrom(LiveInterval &IntA,
 
   SlotIndex FillerStart = ValLR->end, FillerEnd = BLR->start;
   // We are about to delete CopyMI, so need to remove it as the 'instruction
-  // that defines this value #'. Update the the valnum with the new defining
+  // that defines this value #'. Update the valnum with the new defining
   // instruction #.
   BValNo->def  = FillerStart;
   BValNo->setCopy(0);
@@ -375,8 +375,9 @@ bool SimpleRegisterCoalescing::RemoveCopyByCommutingDef(LiveInterval &IntA,
 
   // If some of the uses of IntA.reg is already coalesced away, return false.
   // It's not possible to determine whether it's safe to perform the coalescing.
-  for (MachineRegisterInfo::use_iterator UI = mri_->use_begin(IntA.reg),
-         UE = mri_->use_end(); UI != UE; ++UI) {
+  for (MachineRegisterInfo::use_nodbg_iterator UI = 
+         mri_->use_nodbg_begin(IntA.reg), 
+       UE = mri_->use_nodbg_end(); UI != UE; ++UI) {
     MachineInstr *UseMI = &*UI;
     SlotIndex UseIdx = li_->getInstructionIndex(UseMI);
     LiveInterval::iterator ULR = IntA.FindLiveRangeContaining(UseIdx);
@@ -430,6 +431,12 @@ bool SimpleRegisterCoalescing::RemoveCopyByCommutingDef(LiveInterval &IntA,
     ++UI;
     if (JoinedCopies.count(UseMI))
       continue;
+    if (UseMI->isDebugValue()) {
+      // FIXME These don't have an instruction index.  Not clear we have enough
+      // info to decide whether to do this replacement or not.  For now do it.
+      UseMO.setReg(NewReg);
+      continue;
+    }
     SlotIndex UseIdx = li_->getInstructionIndex(UseMI).getUseIndex();
     LiveInterval::iterator ULR = IntA.FindLiveRangeContaining(UseIdx);
     if (ULR == IntA.end() || ULR->valno != AValNo)
@@ -659,7 +666,7 @@ bool SimpleRegisterCoalescing::ReMaterializeTrivialDef(LiveInterval &SrcInt,
     return false;
   if (TID.getNumDefs() != 1)
     return false;
-  if (DefMI->getOpcode() != TargetInstrInfo::IMPLICIT_DEF) {
+  if (!DefMI->isImplicitDef()) {
     // Make sure the copy destination register class fits the instruction
     // definition register class. The mismatch can happen as a result of earlier
     // extract_subreg, insert_subreg, subreg_to_reg coalescing.
@@ -764,11 +771,16 @@ SimpleRegisterCoalescing::UpdateRegDefsUses(unsigned SrcReg, unsigned DstReg,
     SubIdx = 0;
   }
 
+  // Copy the register use-list before traversing it. We may be adding operands
+  // and invalidating pointers.
+  SmallVector<std::pair<MachineInstr*, unsigned>, 32> reglist;
   for (MachineRegisterInfo::reg_iterator I = mri_->reg_begin(SrcReg),
-         E = mri_->reg_end(); I != E; ) {
-    MachineOperand &O = I.getOperand();
-    MachineInstr *UseMI = &*I;
-    ++I;
+         E = mri_->reg_end(); I != E; ++I)
+    reglist.push_back(std::make_pair(&*I, I.getOperandNo()));
+
+  for (unsigned N=0; N != reglist.size(); ++N) {
+    MachineInstr *UseMI = reglist[N].first;
+    MachineOperand &O = UseMI->getOperand(reglist[N].second);
     unsigned OldSubIdx = O.getSubReg();
     if (DstIsPhys) {
       unsigned UseDstReg = DstReg;
@@ -789,6 +801,19 @@ SimpleRegisterCoalescing::UpdateRegDefsUses(unsigned SrcReg, unsigned DstReg,
 
       O.setReg(UseDstReg);
       O.setSubReg(0);
+      if (OldSubIdx) {
+        // Def and kill of subregister of a virtual register actually defs and
+        // kills the whole register. Add imp-defs and imp-kills as needed.
+        if (O.isDef()) {
+          if(O.isDead())
+            UseMI->addRegisterDead(DstReg, tri_, true);
+          else
+            UseMI->addRegisterDefined(DstReg, tri_);
+        } else if (!O.isUndef() &&
+                   (O.isKill() ||
+                    UseMI->isRegTiedToDefOperand(&O-&UseMI->getOperand(0))))
+          UseMI->addRegisterKilled(DstReg, tri_, true);
+      }
       continue;
     }
 
@@ -1029,8 +1054,9 @@ SimpleRegisterCoalescing::isWinToJoinVRWithSrcPhysReg(MachineInstr *CopyMI,
   unsigned Threshold = allocatableRCRegs_[RC].count() * 2;
   unsigned Length = li_->getApproximateInstructionCount(DstInt);
   if (Length > Threshold &&
-      (((float)std::distance(mri_->use_begin(DstInt.reg),
-                             mri_->use_end()) / Length) < (1.0 / Threshold)))
+      (((float)std::distance(mri_->use_nodbg_begin(DstInt.reg),
+                             mri_->use_nodbg_end()) / Length) < 
+        (1.0 / Threshold)))
     return false;
 
   // If the virtual register live interval extends into a loop, turn down
@@ -1079,15 +1105,16 @@ SimpleRegisterCoalescing::isWinToJoinVRWithDstPhysReg(MachineInstr *CopyMI,
                                                      MachineBasicBlock *CopyMBB,
                                                      LiveInterval &DstInt,
                                                      LiveInterval &SrcInt) {
-  // If the virtual register live interval is long but it has low use desity,
+  // If the virtual register live interval is long but it has low use density,
   // do not join them, instead mark the physical register as its allocation
   // preference.
   const TargetRegisterClass *RC = mri_->getRegClass(SrcInt.reg);
   unsigned Threshold = allocatableRCRegs_[RC].count() * 2;
   unsigned Length = li_->getApproximateInstructionCount(SrcInt);
   if (Length > Threshold &&
-      (((float)std::distance(mri_->use_begin(SrcInt.reg),
-                             mri_->use_end()) / Length) < (1.0 / Threshold)))
+      (((float)std::distance(mri_->use_nodbg_begin(SrcInt.reg),
+                             mri_->use_nodbg_end()) / Length) < 
+          (1.0 / Threshold)))
     return false;
 
   if (SrcInt.empty())
@@ -1139,12 +1166,14 @@ SimpleRegisterCoalescing::isWinToJoinCrossClass(unsigned LargeReg,
   LiveInterval &SmallInt = li_->getInterval(SmallReg);
   unsigned LargeSize = li_->getApproximateInstructionCount(LargeInt);
   unsigned SmallSize = li_->getApproximateInstructionCount(SmallInt);
-  if (SmallSize > Threshold || LargeSize > Threshold)
-    if ((float)std::distance(mri_->use_begin(SmallReg),
-                             mri_->use_end()) / SmallSize <
-        (float)std::distance(mri_->use_begin(LargeReg),
-                             mri_->use_end()) / LargeSize)
+  if (LargeSize > Threshold) {
+    unsigned SmallUses = std::distance(mri_->use_nodbg_begin(SmallReg),
+                                       mri_->use_nodbg_end());
+    unsigned LargeUses = std::distance(mri_->use_nodbg_begin(LargeReg),
+                                       mri_->use_nodbg_end());
+    if (SmallUses*LargeSize < LargeUses*SmallSize)
       return false;
+  }
   return true;
 }
 
@@ -1164,13 +1193,15 @@ SimpleRegisterCoalescing::HasIncompatibleSubRegDefUse(MachineInstr *CopyMI,
   for (MachineRegisterInfo::reg_iterator I = mri_->reg_begin(VirtReg),
          E = mri_->reg_end(); I != E; ++I) {
     MachineOperand &O = I.getOperand();
+    if (O.isDebug())
+      continue;
     MachineInstr *MI = &*I;
     if (MI == CopyMI || JoinedCopies.count(MI))
       continue;
     unsigned SubIdx = O.getSubReg();
     if (SubIdx && !tri_->getSubReg(PhysReg, SubIdx))
       return true;
-    if (MI->getOpcode() == TargetInstrInfo::EXTRACT_SUBREG) {
+    if (MI->isExtractSubreg()) {
       SubIdx = MI->getOperand(2).getImm();
       if (O.isUse() && !tri_->getSubReg(PhysReg, SubIdx))
         return true;
@@ -1184,8 +1215,7 @@ SimpleRegisterCoalescing::HasIncompatibleSubRegDefUse(MachineInstr *CopyMI,
           return true;
       }
     }
-    if (MI->getOpcode() == TargetInstrInfo::INSERT_SUBREG ||
-        MI->getOpcode() == TargetInstrInfo::SUBREG_TO_REG) {
+    if (MI->isInsertSubreg() || MI->isSubregToReg()) {
       SubIdx = MI->getOperand(3).getImm();
       if (VirtReg == MI->getOperand(0).getReg()) {
         if (!tri_->getSubReg(PhysReg, SubIdx))
@@ -1296,9 +1326,9 @@ bool SimpleRegisterCoalescing::JoinCopy(CopyRec &TheCopy, bool &Again) {
   DEBUG(dbgs() << li_->getInstructionIndex(CopyMI) << '\t' << *CopyMI);
 
   unsigned SrcReg, DstReg, SrcSubIdx = 0, DstSubIdx = 0;
-  bool isExtSubReg = CopyMI->getOpcode() == TargetInstrInfo::EXTRACT_SUBREG;
-  bool isInsSubReg = CopyMI->getOpcode() == TargetInstrInfo::INSERT_SUBREG;
-  bool isSubRegToReg = CopyMI->getOpcode() == TargetInstrInfo::SUBREG_TO_REG;
+  bool isExtSubReg = CopyMI->isExtractSubreg();
+  bool isInsSubReg = CopyMI->isInsertSubreg();
+  bool isSubRegToReg = CopyMI->isSubregToReg();
   unsigned SubIdx = 0;
   if (isExtSubReg) {
     DstReg    = CopyMI->getOperand(0).getReg();
@@ -1551,7 +1581,10 @@ bool SimpleRegisterCoalescing::JoinCopy(CopyRec &TheCopy, bool &Again) {
         (isExtSubReg || DstRC->isASubClass()) &&
         !isWinToJoinCrossClass(LargeReg, SmallReg,
                                allocatableRCRegs_[NewRC].count())) {
-      DEBUG(dbgs() << "\tSrc/Dest are different register classes.\n");
+      DEBUG(dbgs() << "\tSrc/Dest are different register classes: "
+                   << SrcRC->getName() << "/"
+                   << DstRC->getName() << " -> "
+                   << NewRC->getName() << ".\n");
       // Allow the coalescer to try again in case either side gets coalesced to
       // a physical register that's compatible with the other side. e.g.
       // r1024 = MOV32to32_ r1025
@@ -1631,8 +1664,8 @@ bool SimpleRegisterCoalescing::JoinCopy(CopyRec &TheCopy, bool &Again) {
         unsigned Length = li_->getApproximateInstructionCount(JoinVInt);
         float Ratio = 1.0 / Threshold;
         if (Length > Threshold &&
-            (((float)std::distance(mri_->use_begin(JoinVReg),
-                                   mri_->use_end()) / Length) < Ratio)) {
+            (((float)std::distance(mri_->use_nodbg_begin(JoinVReg),
+                                   mri_->use_nodbg_end()) / Length) < Ratio)) {
           mri_->setRegAllocationHint(JoinVInt.reg, 0, JoinPReg);
           ++numAborts;
           DEBUG(dbgs() << "\tMay tie down a physical register, abort!\n");
@@ -1755,6 +1788,23 @@ bool SimpleRegisterCoalescing::JoinCopy(CopyRec &TheCopy, bool &Again) {
 
   UpdateRegDefsUses(SrcReg, DstReg, SubIdx);
 
+  // If we have extended the live range of a physical register, make sure we
+  // update live-in lists as well.
+  if (TargetRegisterInfo::isPhysicalRegister(DstReg)) {
+    const LiveInterval &VRegInterval = li_->getInterval(SrcReg);
+    SmallVector<MachineBasicBlock*, 16> BlockSeq;
+    for (LiveInterval::const_iterator I = VRegInterval.begin(),
+           E = VRegInterval.end(); I != E; ++I ) {
+      li_->findLiveInMBBs(I->start, I->end, BlockSeq);
+      for (unsigned idx = 0, size = BlockSeq.size(); idx != size; ++idx) {
+        MachineBasicBlock &block = *BlockSeq[idx];
+        if (!block.isLiveIn(DstReg))
+          block.addLiveIn(DstReg);
+      }
+      BlockSeq.clear();
+    }
+  }
+
   // SrcReg is guarateed to be the register whose live interval that is
   // being merged.
   li_->removeInterval(SrcReg);
@@ -1849,11 +1899,11 @@ static bool isValNoDefMove(const MachineInstr *MI, unsigned DR, unsigned SR,
   unsigned SrcReg, DstReg, SrcSubIdx, DstSubIdx;
   if (TII->isMoveInstr(*MI, SrcReg, DstReg, SrcSubIdx, DstSubIdx))
     ;
-  else if (MI->getOpcode() == TargetInstrInfo::EXTRACT_SUBREG) {
+  else if (MI->isExtractSubreg()) {
     DstReg = MI->getOperand(0).getReg();
     SrcReg = MI->getOperand(1).getReg();
-  } else if (MI->getOpcode() == TargetInstrInfo::SUBREG_TO_REG ||
-             MI->getOpcode() == TargetInstrInfo::INSERT_SUBREG) {
+  } else if (MI->isSubregToReg() ||
+             MI->isInsertSubreg()) {
     DstReg = MI->getOperand(0).getReg();
     SrcReg = MI->getOperand(2).getReg();
   } else
@@ -2425,16 +2475,15 @@ void SimpleRegisterCoalescing::CopyCoalesceInMBB(MachineBasicBlock *MBB,
     // If this isn't a copy nor a extract_subreg, we can't join intervals.
     unsigned SrcReg, DstReg, SrcSubIdx, DstSubIdx;
     bool isInsUndef = false;
-    if (Inst->getOpcode() == TargetInstrInfo::EXTRACT_SUBREG) {
+    if (Inst->isExtractSubreg()) {
       DstReg = Inst->getOperand(0).getReg();
       SrcReg = Inst->getOperand(1).getReg();
-    } else if (Inst->getOpcode() == TargetInstrInfo::INSERT_SUBREG) {
+    } else if (Inst->isInsertSubreg()) {
       DstReg = Inst->getOperand(0).getReg();
       SrcReg = Inst->getOperand(2).getReg();
       if (Inst->getOperand(1).isUndef())
         isInsUndef = true;
-    } else if (Inst->getOpcode() == TargetInstrInfo::INSERT_SUBREG ||
-               Inst->getOpcode() == TargetInstrInfo::SUBREG_TO_REG) {
+    } else if (Inst->isInsertSubreg() || Inst->isSubregToReg()) {
       DstReg = Inst->getOperand(0).getReg();
       SrcReg = Inst->getOperand(2).getReg();
     } else if (!tii_->isMoveInstr(*Inst, SrcReg, DstReg, SrcSubIdx, DstSubIdx))
@@ -2549,8 +2598,8 @@ SimpleRegisterCoalescing::differingRegisterClasses(unsigned RegA,
   return !RegClassA->contains(RegB);
 }
 
-/// lastRegisterUse - Returns the last use of the specific register between
-/// cycles Start and End or NULL if there are no uses.
+/// lastRegisterUse - Returns the last (non-debug) use of the specific register
+/// between cycles Start and End or NULL if there are no uses.
 MachineOperand *
 SimpleRegisterCoalescing::lastRegisterUse(SlotIndex Start,
                                           SlotIndex End,
@@ -2559,8 +2608,8 @@ SimpleRegisterCoalescing::lastRegisterUse(SlotIndex Start,
   UseIdx = SlotIndex();
   if (TargetRegisterInfo::isVirtualRegister(Reg)) {
     MachineOperand *LastUse = NULL;
-    for (MachineRegisterInfo::use_iterator I = mri_->use_begin(Reg),
-           E = mri_->use_end(); I != E; ++I) {
+    for (MachineRegisterInfo::use_nodbg_iterator I = mri_->use_nodbg_begin(Reg),
+           E = mri_->use_nodbg_end(); I != E; ++I) {
       MachineOperand &Use = I.getOperand();
       MachineInstr *UseMI = Use.getParent();
       unsigned SrcReg, DstReg, SrcSubIdx, DstSubIdx;
@@ -2670,10 +2719,8 @@ bool SimpleRegisterCoalescing::runOnMachineFunction(MachineFunction &fn) {
         // Delete all coalesced copies.
         bool DoDelete = true;
         if (!tii_->isMoveInstr(*MI, SrcReg, DstReg, SrcSubIdx, DstSubIdx)) {
-          assert((MI->getOpcode() == TargetInstrInfo::EXTRACT_SUBREG ||
-                  MI->getOpcode() == TargetInstrInfo::INSERT_SUBREG ||
-                  MI->getOpcode() == TargetInstrInfo::SUBREG_TO_REG) &&
-                 "Unrecognized copy instruction");
+          assert((MI->isExtractSubreg() || MI->isInsertSubreg() ||
+                  MI->isSubregToReg()) && "Unrecognized copy instruction");
           DstReg = MI->getOperand(0).getReg();
           if (TargetRegisterInfo::isPhysicalRegister(DstReg))
             // Do not delete extract_subreg, insert_subreg of physical
diff --git a/libclamav/c++/llvm/lib/CodeGen/SjLjEHPrepare.cpp b/libclamav/c++/llvm/lib/CodeGen/SjLjEHPrepare.cpp
index 9558933..8d4d1b2 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SjLjEHPrepare.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SjLjEHPrepare.cpp
@@ -51,6 +51,7 @@ namespace {
     Value *PersonalityFn;
     Constant *SelectorFn;
     Constant *ExceptionFn;
+    Constant *CallSiteFn;
 
     Value *CallSite;
   public:
@@ -116,6 +117,7 @@ bool SjLjEHPass::doInitialization(Module &M) {
   LSDAAddrFn = Intrinsic::getDeclaration(&M, Intrinsic::eh_sjlj_lsda);
   SelectorFn = Intrinsic::getDeclaration(&M, Intrinsic::eh_selector);
   ExceptionFn = Intrinsic::getDeclaration(&M, Intrinsic::eh_exception);
+  CallSiteFn = Intrinsic::getDeclaration(&M, Intrinsic::eh_sjlj_callsite);
   PersonalityFn = 0;
 
   return true;
@@ -143,15 +145,14 @@ void SjLjEHPass::markInvokeCallSite(InvokeInst *II, unsigned InvokeNo,
     }
   }
 
-  // Insert a store of the invoke num before the invoke and store zero into the
-  // location afterward.
+  // Insert a store of the invoke num before the invoke
   new StoreInst(CallSiteNoC, CallSite, true, II);  // volatile
+  CallInst::Create(CallSiteFn, CallSiteNoC, "", II);
 
   // Add a switch case to our unwind block.
   CatchSwitch->addCase(SwitchValC, II->getUnwindDest());
-  // We still want this to look like an invoke so we emit the LSDA properly
-  // FIXME: ??? Or will this cause strangeness with mis-matched IDs like
-  //  when it was in the front end?
+  // We still want this to look like an invoke so we emit the LSDA properly,
+  // so we don't transform the invoke into a call here.
 }
 
 /// MarkBlocksLiveIn - Insert BB and all of its predescessors into LiveBBs until
diff --git a/libclamav/c++/llvm/lib/CodeGen/SlotIndexes.cpp b/libclamav/c++/llvm/lib/CodeGen/SlotIndexes.cpp
index a23efb2..6110ef5 100644
--- a/libclamav/c++/llvm/lib/CodeGen/SlotIndexes.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/SlotIndexes.cpp
@@ -95,7 +95,7 @@ bool SlotIndexes::runOnMachineFunction(MachineFunction &fn) {
 
   push_back(createEntry(0, index));
 
-  // Iterate over the the function.
+  // Iterate over the function.
   for (MachineFunction::iterator mbbItr = mf->begin(), mbbEnd = mf->end();
        mbbItr != mbbEnd; ++mbbItr) {
     MachineBasicBlock *mbb = &*mbbItr;
@@ -107,8 +107,8 @@ bool SlotIndexes::runOnMachineFunction(MachineFunction &fn) {
 
     for (MachineBasicBlock::iterator miItr = mbb->begin(), miEnd = mbb->end();
          miItr != miEnd; ++miItr) {
-      MachineInstr *mi = &*miItr;
-      if (mi->getOpcode()==TargetInstrInfo::DEBUG_VALUE)
+      MachineInstr *mi = miItr;
+      if (mi->isDebugValue())
         continue;
 
       if (miItr == mbb->getFirstTerminator()) {
diff --git a/libclamav/c++/llvm/lib/CodeGen/StackSlotColoring.cpp b/libclamav/c++/llvm/lib/CodeGen/StackSlotColoring.cpp
index 2170703..12d38f0 100644
--- a/libclamav/c++/llvm/lib/CodeGen/StackSlotColoring.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/StackSlotColoring.cpp
@@ -504,10 +504,8 @@ bool StackSlotColoring::PropagateBackward(MachineBasicBlock::iterator MII,
 
         // Abort the use is actually a sub-register def. We don't have enough
         // information to figure out if it is really legal.
-        if (MO.getSubReg() ||
-            TID.getOpcode() == TargetInstrInfo::EXTRACT_SUBREG ||
-            TID.getOpcode() == TargetInstrInfo::INSERT_SUBREG ||
-            TID.getOpcode() == TargetInstrInfo::SUBREG_TO_REG)
+        if (MO.getSubReg() || MII->isExtractSubreg() ||
+            MII->isInsertSubreg() || MII->isSubregToReg())
           return false;
 
         const TargetRegisterClass *RC = TID.OpInfo[i].getRegClass(TRI);
@@ -569,8 +567,7 @@ bool StackSlotColoring::PropagateForward(MachineBasicBlock::iterator MII,
 
         // Abort the use is actually a sub-register use. We don't have enough
         // information to figure out if it is really legal.
-        if (MO.getSubReg() ||
-            TID.getOpcode() == TargetInstrInfo::EXTRACT_SUBREG)
+        if (MO.getSubReg() || MII->isExtractSubreg())
           return false;
 
         const TargetRegisterClass *RC = TID.OpInfo[i].getRegClass(TRI);
diff --git a/libclamav/c++/llvm/lib/CodeGen/StrongPHIElimination.cpp b/libclamav/c++/llvm/lib/CodeGen/StrongPHIElimination.cpp
index bd7cb75..f8f6a55 100644
--- a/libclamav/c++/llvm/lib/CodeGen/StrongPHIElimination.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/StrongPHIElimination.cpp
@@ -49,7 +49,7 @@ namespace {
     std::map<unsigned, std::vector<unsigned> > Stacks;
     
     // Registers in UsedByAnother are PHI nodes that are themselves
-    // used as operands to another another PHI node
+    // used as operands to another PHI node
     std::set<unsigned> UsedByAnother;
     
    // RenameSets is a map from a PHI-defined register
@@ -419,7 +419,7 @@ void StrongPHIElimination::processBlock(MachineBasicBlock* MBB) {
   
   // Iterate over all the PHI nodes in this block
   MachineBasicBlock::iterator P = MBB->begin();
-  while (P != MBB->end() && P->getOpcode() == TargetInstrInfo::PHI) {
+  while (P != MBB->end() && P->isPHI()) {
     unsigned DestReg = P->getOperand(0).getReg();
     
    // Don't bother doing PHI elimination for dead PHIs.
@@ -452,7 +452,7 @@ void StrongPHIElimination::processBlock(MachineBasicBlock* MBB) {
       
       // We don't need to insert copies for implicit_defs.
       MachineInstr* DefMI = MRI.getVRegDef(SrcReg);
-      if (DefMI->getOpcode() == TargetInstrInfo::IMPLICIT_DEF)
+      if (DefMI->isImplicitDef())
         ProcessedNames.insert(SrcReg);
     
       // Check for trivial interferences via liveness information, allowing us
@@ -470,7 +470,7 @@ void StrongPHIElimination::processBlock(MachineBasicBlock* MBB) {
       if (isLiveIn(SrcReg, P->getParent(), LI) ||
           isLiveOut(P->getOperand(0).getReg(),
                     MRI.getVRegDef(SrcReg)->getParent(), LI) ||
-          ( MRI.getVRegDef(SrcReg)->getOpcode() == TargetInstrInfo::PHI &&
+          ( MRI.getVRegDef(SrcReg)->isPHI() &&
             isLiveIn(P->getOperand(0).getReg(),
                      MRI.getVRegDef(SrcReg)->getParent(), LI) ) ||
           ProcessedNames.count(SrcReg) ||
@@ -810,7 +810,7 @@ void StrongPHIElimination::InsertCopies(MachineDomTreeNode* MDTN,
   // Rewrite register uses from Stacks
   for (MachineBasicBlock::iterator I = MBB->begin(), E = MBB->end();
       I != E; ++I) {
-    if (I->getOpcode() == TargetInstrInfo::PHI)
+    if (I->isPHI())
       continue;
     
     for (unsigned i = 0; i < I->getNumOperands(); ++i)
@@ -907,8 +907,7 @@ bool StrongPHIElimination::runOnMachineFunction(MachineFunction &Fn) {
   
   // Determine which phi node operands need copies
   for (MachineFunction::iterator I = Fn.begin(), E = Fn.end(); I != E; ++I)
-    if (!I->empty() &&
-        I->begin()->getOpcode() == TargetInstrInfo::PHI)
+    if (!I->empty() && I->begin()->isPHI())
       processBlock(I);
   
   // Break interferences where two different phis want to coalesce
@@ -996,7 +995,7 @@ bool StrongPHIElimination::runOnMachineFunction(MachineFunction &Fn) {
   for (MachineFunction::iterator I = Fn.begin(), E = Fn.end(); I != E; ++I) {
     for (MachineBasicBlock::iterator BI = I->begin(), BE = I->end();
          BI != BE; ++BI)
-      if (BI->getOpcode() == TargetInstrInfo::PHI)
+      if (BI->isPHI())
         phis.push_back(BI);
   }
   
diff --git a/libclamav/c++/llvm/lib/CodeGen/TailDuplication.cpp b/libclamav/c++/llvm/lib/CodeGen/TailDuplication.cpp
index d6860bc..3223e53 100644
--- a/libclamav/c++/llvm/lib/CodeGen/TailDuplication.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/TailDuplication.cpp
@@ -121,7 +121,7 @@ static void VerifyPHIs(MachineFunction &MF, bool CheckExtra) {
                                                 MBB->pred_end());
     MachineBasicBlock::iterator MI = MBB->begin();
     while (MI != MBB->end()) {
-      if (MI->getOpcode() != TargetInstrInfo::PHI)
+      if (!MI->isPHI())
         break;
       for (SmallSetVector<MachineBasicBlock *, 8>::iterator PI = Preds.begin(),
              PE = Preds.end(); PI != PE; ++PI) {
@@ -378,7 +378,7 @@ TailDuplicatePass::UpdateSuccessorsPHIs(MachineBasicBlock *FromBB, bool isDead,
     MachineBasicBlock *SuccBB = *SI;
     for (MachineBasicBlock::iterator II = SuccBB->begin(), EE = SuccBB->end();
          II != EE; ++II) {
-      if (II->getOpcode() != TargetInstrInfo::PHI)
+      if (!II->isPHI())
         break;
       unsigned Idx = 0;
       for (unsigned i = 1, e = II->getNumOperands(); i != e; i += 2) {
@@ -403,26 +403,45 @@ TailDuplicatePass::UpdateSuccessorsPHIs(MachineBasicBlock *FromBB, bool isDead,
             II->RemoveOperand(i);
           }
         }
-        II->RemoveOperand(Idx+1);
-        II->RemoveOperand(Idx);
-      }
+      } else
+        Idx = 0;
+
+      // If Idx is set, the operands at Idx and Idx+1 must be removed.
+      // We reuse the location to avoid expensive RemoveOperand calls.
+
       DenseMap<unsigned,AvailableValsTy>::iterator LI=SSAUpdateVals.find(Reg);
       if (LI != SSAUpdateVals.end()) {
         // This register is defined in the tail block.
         for (unsigned j = 0, ee = LI->second.size(); j != ee; ++j) {
           MachineBasicBlock *SrcBB = LI->second[j].first;
           unsigned SrcReg = LI->second[j].second;
-          II->addOperand(MachineOperand::CreateReg(SrcReg, false));
-          II->addOperand(MachineOperand::CreateMBB(SrcBB));
+          if (Idx != 0) {
+            II->getOperand(Idx).setReg(SrcReg);
+            II->getOperand(Idx+1).setMBB(SrcBB);
+            Idx = 0;
+          } else {
+            II->addOperand(MachineOperand::CreateReg(SrcReg, false));
+            II->addOperand(MachineOperand::CreateMBB(SrcBB));
+          }
         }
       } else {
         // Live in tail block, must also be live in predecessors.
         for (unsigned j = 0, ee = TDBBs.size(); j != ee; ++j) {
           MachineBasicBlock *SrcBB = TDBBs[j];
-          II->addOperand(MachineOperand::CreateReg(Reg, false));
-          II->addOperand(MachineOperand::CreateMBB(SrcBB));
+          if (Idx != 0) {
+            II->getOperand(Idx).setReg(Reg);
+            II->getOperand(Idx+1).setMBB(SrcBB);
+            Idx = 0;
+          } else {
+            II->addOperand(MachineOperand::CreateReg(Reg, false));
+            II->addOperand(MachineOperand::CreateMBB(SrcBB));
+          }
         }
       }
+      if (Idx != 0) {
+        II->RemoveOperand(Idx+1);
+        II->RemoveOperand(Idx);
+      }
     }
   }
 }
@@ -476,7 +495,7 @@ TailDuplicatePass::TailDuplicate(MachineBasicBlock *TailBB, MachineFunction &MF,
     if (InstrCount == MaxDuplicateCount) return false;
     // Remember if we saw a call.
     if (I->getDesc().isCall()) HasCall = true;
-    if (I->getOpcode() != TargetInstrInfo::PHI)
+    if (!I->isPHI())
       InstrCount += 1;
   }
   // Heuristically, don't tail-duplicate calls if it would expand code size,
@@ -528,7 +547,7 @@ TailDuplicatePass::TailDuplicate(MachineBasicBlock *TailBB, MachineFunction &MF,
     while (I != TailBB->end()) {
       MachineInstr *MI = &*I;
       ++I;
-      if (MI->getOpcode() == TargetInstrInfo::PHI) {
+      if (MI->isPHI()) {
         // Replace the uses of the def of the PHI with the register coming
         // from PredBB.
         ProcessPHI(MI, TailBB, PredBB, LocalVRMap, CopyInfos);
@@ -580,7 +599,7 @@ TailDuplicatePass::TailDuplicate(MachineBasicBlock *TailBB, MachineFunction &MF,
       SmallVector<std::pair<unsigned,unsigned>, 4> CopyInfos;
       MachineBasicBlock::iterator I = TailBB->begin();
       // Process PHI instructions first.
-      while (I != TailBB->end() && I->getOpcode() == TargetInstrInfo::PHI) {
+      while (I != TailBB->end() && I->isPHI()) {
         // Replace the uses of the def of the PHI with the register coming
         // from PredBB.
         MachineInstr *MI = &*I++;
diff --git a/libclamav/c++/llvm/lib/CodeGen/TwoAddressInstructionPass.cpp b/libclamav/c++/llvm/lib/CodeGen/TwoAddressInstructionPass.cpp
index a3f6364..6f4ca82 100644
--- a/libclamav/c++/llvm/lib/CodeGen/TwoAddressInstructionPass.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/TwoAddressInstructionPass.cpp
@@ -213,6 +213,9 @@ bool TwoAddressInstructionPass::Sink3AddrInstruction(MachineBasicBlock *MBB,
   unsigned NumVisited = 0;
   for (MachineBasicBlock::iterator I = llvm::next(OldPos); I != KillPos; ++I) {
     MachineInstr *OtherMI = I;
+    // DBG_VALUE cannot be counted against the limit.
+    if (OtherMI->isDebugValue())
+      continue;
     if (NumVisited > 30)  // FIXME: Arbitrary limit to reduce compile time cost.
       return false;
     ++NumVisited;
@@ -316,7 +319,7 @@ bool TwoAddressInstructionPass::NoUseAfterLastDef(unsigned Reg,
          E = MRI->reg_end(); I != E; ++I) {
     MachineOperand &MO = I.getOperand();
     MachineInstr *MI = MO.getParent();
-    if (MI->getParent() != MBB)
+    if (MI->getParent() != MBB || MI->isDebugValue())
       continue;
     DenseMap<MachineInstr*, unsigned>::iterator DI = DistanceMap.find(MI);
     if (DI == DistanceMap.end())
@@ -339,7 +342,7 @@ MachineInstr *TwoAddressInstructionPass::FindLastUseInMBB(unsigned Reg,
          E = MRI->reg_end(); I != E; ++I) {
     MachineOperand &MO = I.getOperand();
     MachineInstr *MI = MO.getParent();
-    if (MI->getParent() != MBB)
+    if (MI->getParent() != MBB || MI->isDebugValue())
       continue;
     DenseMap<MachineInstr*, unsigned>::iterator DI = DistanceMap.find(MI);
     if (DI == DistanceMap.end())
@@ -365,13 +368,13 @@ static bool isCopyToReg(MachineInstr &MI, const TargetInstrInfo *TII,
   DstReg = 0;
   unsigned SrcSubIdx, DstSubIdx;
   if (!TII->isMoveInstr(MI, SrcReg, DstReg, SrcSubIdx, DstSubIdx)) {
-    if (MI.getOpcode() == TargetInstrInfo::EXTRACT_SUBREG) {
+    if (MI.isExtractSubreg()) {
       DstReg = MI.getOperand(0).getReg();
       SrcReg = MI.getOperand(1).getReg();
-    } else if (MI.getOpcode() == TargetInstrInfo::INSERT_SUBREG) {
+    } else if (MI.isInsertSubreg()) {
       DstReg = MI.getOperand(0).getReg();
       SrcReg = MI.getOperand(2).getReg();
-    } else if (MI.getOpcode() == TargetInstrInfo::SUBREG_TO_REG) {
+    } else if (MI.isSubregToReg()) {
       DstReg = MI.getOperand(0).getReg();
       SrcReg = MI.getOperand(2).getReg();
     }
@@ -429,8 +432,7 @@ static bool isKilled(MachineInstr &MI, unsigned Reg,
 /// as a two-address use. If so, return the destination register by reference.
 static bool isTwoAddrUse(MachineInstr &MI, unsigned Reg, unsigned &DstReg) {
   const TargetInstrDesc &TID = MI.getDesc();
-  unsigned NumOps = (MI.getOpcode() == TargetInstrInfo::INLINEASM)
-    ? MI.getNumOperands() : TID.getNumOperands();
+  unsigned NumOps = MI.isInlineAsm() ? MI.getNumOperands():TID.getNumOperands();
   for (unsigned i = 0; i != NumOps; ++i) {
     const MachineOperand &MO = MI.getOperand(i);
     if (!MO.isReg() || !MO.isUse() || MO.getReg() != Reg)
@@ -452,11 +454,11 @@ MachineInstr *findOnlyInterestingUse(unsigned Reg, MachineBasicBlock *MBB,
                                      const TargetInstrInfo *TII,
                                      bool &IsCopy,
                                      unsigned &DstReg, bool &IsDstPhys) {
-  MachineRegisterInfo::use_iterator UI = MRI->use_begin(Reg);
-  if (UI == MRI->use_end())
+  MachineRegisterInfo::use_nodbg_iterator UI = MRI->use_nodbg_begin(Reg);
+  if (UI == MRI->use_nodbg_end())
     return 0;
   MachineInstr &UseMI = *UI;
-  if (++UI != MRI->use_end())
+  if (++UI != MRI->use_nodbg_end())
     // More than one use.
     return 0;
   if (UseMI.getParent() != MBB)
@@ -924,6 +926,10 @@ bool TwoAddressInstructionPass::runOnMachineFunction(MachineFunction &MF) {
     for (MachineBasicBlock::iterator mi = mbbi->begin(), me = mbbi->end();
          mi != me; ) {
       MachineBasicBlock::iterator nmi = llvm::next(mi);
+      if (mi->isDebugValue()) {
+        mi = nmi;
+        continue;
+      }
       const TargetInstrDesc &TID = mi->getDesc();
       bool FirstTied = true;
 
@@ -933,7 +939,7 @@ bool TwoAddressInstructionPass::runOnMachineFunction(MachineFunction &MF) {
 
       // First scan through all the tied register uses in this instruction
       // and record a list of pairs of tied operands for each register.
-      unsigned NumOps = (mi->getOpcode() == TargetInstrInfo::INLINEASM)
+      unsigned NumOps = mi->isInlineAsm()
         ? mi->getNumOperands() : TID.getNumOperands();
       for (unsigned SrcIdx = 0; SrcIdx < NumOps; ++SrcIdx) {
         unsigned DstIdx = 0;
diff --git a/libclamav/c++/llvm/lib/CodeGen/UnreachableBlockElim.cpp b/libclamav/c++/llvm/lib/CodeGen/UnreachableBlockElim.cpp
index 6ab5db2..b0f0a07 100644
--- a/libclamav/c++/llvm/lib/CodeGen/UnreachableBlockElim.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/UnreachableBlockElim.cpp
@@ -148,8 +148,7 @@ bool UnreachableMachineBlockElim::runOnMachineFunction(MachineFunction &F) {
         MachineBasicBlock* succ = *BB->succ_begin();
 
         MachineBasicBlock::iterator start = succ->begin();
-        while (start != succ->end() &&
-               start->getOpcode() == TargetInstrInfo::PHI) {
+        while (start != succ->end() && start->isPHI()) {
           for (unsigned i = start->getNumOperands() - 1; i >= 2; i-=2)
             if (start->getOperand(i).isMBB() &&
                 start->getOperand(i).getMBB() == BB) {
@@ -188,8 +187,7 @@ bool UnreachableMachineBlockElim::runOnMachineFunction(MachineFunction &F) {
     SmallPtrSet<MachineBasicBlock*, 8> preds(BB->pred_begin(),
                                              BB->pred_end());
     MachineBasicBlock::iterator phi = BB->begin();
-    while (phi != BB->end() &&
-           phi->getOpcode() == TargetInstrInfo::PHI) {
+    while (phi != BB->end() && phi->isPHI()) {
       for (unsigned i = phi->getNumOperands() - 1; i >= 2; i-=2)
         if (!preds.count(phi->getOperand(i).getMBB())) {
           phi->RemoveOperand(i);
diff --git a/libclamav/c++/llvm/lib/CodeGen/VirtRegMap.cpp b/libclamav/c++/llvm/lib/CodeGen/VirtRegMap.cpp
index d4fb2e4..5956b61 100644
--- a/libclamav/c++/llvm/lib/CodeGen/VirtRegMap.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/VirtRegMap.cpp
@@ -9,7 +9,7 @@
 //
 // This file implements the VirtRegMap class.
 //
-// It also contains implementations of the the Spiller interface, which, given a
+// It also contains implementations of the Spiller interface, which, given a
 // virtual register map and a machine function, eliminates all virtual
 // references by replacing them with physical register references - adding spill
 // code as necessary.
diff --git a/libclamav/c++/llvm/lib/CodeGen/VirtRegRewriter.cpp b/libclamav/c++/llvm/lib/CodeGen/VirtRegRewriter.cpp
index df2b8d2..84e0398 100644
--- a/libclamav/c++/llvm/lib/CodeGen/VirtRegRewriter.cpp
+++ b/libclamav/c++/llvm/lib/CodeGen/VirtRegRewriter.cpp
@@ -62,6 +62,7 @@ VirtRegRewriter::~VirtRegRewriter() {}
 
 /// substitutePhysReg - Replace virtual register in MachineOperand with a
 /// physical register. Do the right thing with the sub-register index.
+/// Note that operands may be added, so the MO reference is no longer valid.
 static void substitutePhysReg(MachineOperand &MO, unsigned Reg,
                               const TargetRegisterInfo &TRI) {
   if (unsigned SubIdx = MO.getSubReg()) {
@@ -123,14 +124,15 @@ struct TrivialRewriter : public VirtRegRewriter {
           continue;
         unsigned pReg = VRM.getPhys(reg);
         mri->setPhysRegUsed(pReg);
-        for (MachineRegisterInfo::reg_iterator regItr = mri->reg_begin(reg),
-             regEnd = mri->reg_end(); regItr != regEnd;) {
-          MachineOperand &mop = regItr.getOperand();
-          assert(mop.isReg() && mop.getReg() == reg && "reg_iterator broken?");
-          ++regItr;
-          substitutePhysReg(mop, pReg, *tri);
-          changed = true;
-        }
+        // Copy the register use-list before traversing it.
+        SmallVector<std::pair<MachineInstr*, unsigned>, 32> reglist;
+        for (MachineRegisterInfo::reg_iterator I = mri->reg_begin(reg),
+               E = mri->reg_end(); I != E; ++I)
+          reglist.push_back(std::make_pair(&*I, I.getOperandNo()));
+        for (unsigned N=0; N != reglist.size(); ++N)
+          substitutePhysReg(reglist[N].first->getOperand(reglist[N].second),
+                            pReg, *tri);
+        changed |= !reglist.empty();
       }
     }
     
@@ -1759,7 +1761,7 @@ private:
 
            // Mark it as killed.
             MachineInstr *CopyMI = prior(InsertLoc);
-            CopyMI->setAsmPrinterFlag(AsmPrinter::ReloadReuse);
+            CopyMI->setAsmPrinterFlag(MachineInstr::ReloadReuse);
             MachineOperand *KillOpnd = CopyMI->findRegisterUseOperand(InReg);
             KillOpnd->setIsKill();
             UpdateKills(*CopyMI, TRI, RegKills, KillOps);
@@ -1850,31 +1852,30 @@ private:
       KilledMIRegs.clear();
       for (unsigned j = 0, e = VirtUseOps.size(); j != e; ++j) {
         unsigned i = VirtUseOps[j];
-        MachineOperand &MO = MI.getOperand(i);
-        unsigned VirtReg = MO.getReg();
+        unsigned VirtReg = MI.getOperand(i).getReg();
         assert(TargetRegisterInfo::isVirtualRegister(VirtReg) &&
                "Not a virtual register?");
 
-        unsigned SubIdx = MO.getSubReg();
+        unsigned SubIdx = MI.getOperand(i).getSubReg();
         if (VRM.isAssignedReg(VirtReg)) {
           // This virtual register was assigned a physreg!
           unsigned Phys = VRM.getPhys(VirtReg);
           RegInfo->setPhysRegUsed(Phys);
-          if (MO.isDef())
+          if (MI.getOperand(i).isDef())
             ReusedOperands.markClobbered(Phys);
-          substitutePhysReg(MO, Phys, *TRI);
+          substitutePhysReg(MI.getOperand(i), Phys, *TRI);
           if (VRM.isImplicitlyDefined(VirtReg))
             // FIXME: Is this needed?
             BuildMI(MBB, &MI, MI.getDebugLoc(),
-                    TII->get(TargetInstrInfo::IMPLICIT_DEF), Phys);
+                    TII->get(TargetOpcode::IMPLICIT_DEF), Phys);
           continue;
         }
 
         // This virtual register is now known to be a spilled value.
-        if (!MO.isUse())
+        if (!MI.getOperand(i).isUse())
           continue;  // Handle defs in the loop below (handle use&def here though)
 
-        bool AvoidReload = MO.isUndef();
+        bool AvoidReload = MI.getOperand(i).isUndef();
         // Check if it is defined by an implicit def. It should not be spilled.
         // Note, this is for correctness reason. e.g.
         // 8   %reg1024<def> = IMPLICIT_DEF
@@ -1902,8 +1903,7 @@ private:
         //       = EXTRACT_SUBREG fi#1
         // fi#1 is available in EDI, but it cannot be reused because it's not in
         // the right register file.
-        if (PhysReg && !AvoidReload &&
-            (SubIdx || MI.getOpcode() == TargetInstrInfo::EXTRACT_SUBREG)) {
+        if (PhysReg && !AvoidReload && (SubIdx || MI.isExtractSubreg())) {
           const TargetRegisterClass* RC = RegInfo->getRegClass(VirtReg);
           if (!RC->contains(PhysReg))
             PhysReg = 0;
@@ -2038,7 +2038,7 @@ private:
           TII->copyRegToReg(MBB, InsertLoc, DesignatedReg, PhysReg, RC, RC);
 
           MachineInstr *CopyMI = prior(InsertLoc);
-          CopyMI->setAsmPrinterFlag(AsmPrinter::ReloadReuse);
+          CopyMI->setAsmPrinterFlag(MachineInstr::ReloadReuse);
           UpdateKills(*CopyMI, TRI, RegKills, KillOps);
 
           // This invalidates DesignatedReg.
@@ -2167,7 +2167,7 @@ private:
                 // virtual or needing to clobber any values if it's physical).
                 NextMII = &MI;
                 --NextMII;  // backtrack to the copy.
-                NextMII->setAsmPrinterFlag(AsmPrinter::ReloadReuse);
+                NextMII->setAsmPrinterFlag(MachineInstr::ReloadReuse);
                 // Propagate the sub-register index over.
                 if (SubIdx) {
                   DefMO = NextMII->findRegisterDefOperand(DestReg);
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/ExecutionEngine.cpp b/libclamav/c++/llvm/lib/ExecutionEngine/ExecutionEngine.cpp
index 1409519..a31ce38 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/ExecutionEngine.cpp
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/ExecutionEngine.cpp
@@ -18,7 +18,6 @@
 #include "llvm/Constants.h"
 #include "llvm/DerivedTypes.h"
 #include "llvm/Module.h"
-#include "llvm/ModuleProvider.h"
 #include "llvm/ExecutionEngine/GenericValue.h"
 #include "llvm/ADT/Statistic.h"
 #include "llvm/Support/Debug.h"
@@ -36,25 +35,29 @@ using namespace llvm;
 STATISTIC(NumInitBytes, "Number of bytes of global vars initialized");
 STATISTIC(NumGlobals  , "Number of global vars initialized");
 
-ExecutionEngine *(*ExecutionEngine::JITCtor)(ModuleProvider *MP,
-                                             std::string *ErrorStr,
-                                             JITMemoryManager *JMM,
-                                             CodeGenOpt::Level OptLevel,
-                                             bool GVsWithCode,
-					     CodeModel::Model CMM) = 0;
-ExecutionEngine *(*ExecutionEngine::InterpCtor)(ModuleProvider *MP,
+ExecutionEngine *(*ExecutionEngine::JITCtor)(
+  Module *M,
+  std::string *ErrorStr,
+  JITMemoryManager *JMM,
+  CodeGenOpt::Level OptLevel,
+  bool GVsWithCode,
+  CodeModel::Model CMM,
+  StringRef MArch,
+  StringRef MCPU,
+  const SmallVectorImpl<std::string>& MAttrs) = 0;
+ExecutionEngine *(*ExecutionEngine::InterpCtor)(Module *M,
                                                 std::string *ErrorStr) = 0;
 ExecutionEngine::EERegisterFn ExecutionEngine::ExceptionTableRegister = 0;
 
 
-ExecutionEngine::ExecutionEngine(ModuleProvider *P)
+ExecutionEngine::ExecutionEngine(Module *M)
   : EEState(*this),
     LazyFunctionCreator(0) {
   CompilingLazily         = false;
   GVCompilationDisabled   = false;
   SymbolSearchingDisabled = false;
-  Modules.push_back(P);
-  assert(P && "ModuleProvider is null?");
+  Modules.push_back(M);
+  assert(M && "Module is null?");
 }
 
 ExecutionEngine::~ExecutionEngine() {
@@ -69,38 +72,18 @@ char* ExecutionEngine::getMemoryForGV(const GlobalVariable* GV) {
   return new char[GVSize];
 }
 
-/// removeModuleProvider - Remove a ModuleProvider from the list of modules.
-/// Relases the Module from the ModuleProvider, materializing it in the
-/// process, and returns the materialized Module.
-Module* ExecutionEngine::removeModuleProvider(ModuleProvider *P, 
-                                              std::string *ErrInfo) {
-  for(SmallVector<ModuleProvider *, 1>::iterator I = Modules.begin(), 
+/// removeModule - Remove a Module from the list of modules.
+bool ExecutionEngine::removeModule(Module *M) {
+  for(SmallVector<Module *, 1>::iterator I = Modules.begin(), 
         E = Modules.end(); I != E; ++I) {
-    ModuleProvider *MP = *I;
-    if (MP == P) {
+    Module *Found = *I;
+    if (Found == M) {
       Modules.erase(I);
-      clearGlobalMappingsFromModule(MP->getModule());
-      return MP->releaseModule(ErrInfo);
-    }
-  }
-  return NULL;
-}
-
-/// deleteModuleProvider - Remove a ModuleProvider from the list of modules,
-/// and deletes the ModuleProvider and owned Module.  Avoids materializing 
-/// the underlying module.
-void ExecutionEngine::deleteModuleProvider(ModuleProvider *P, 
-                                           std::string *ErrInfo) {
-  for(SmallVector<ModuleProvider *, 1>::iterator I = Modules.begin(), 
-      E = Modules.end(); I != E; ++I) {
-    ModuleProvider *MP = *I;
-    if (MP == P) {
-      Modules.erase(I);
-      clearGlobalMappingsFromModule(MP->getModule());
-      delete MP;
-      return;
+      clearGlobalMappingsFromModule(M);
+      return true;
     }
   }
+  return false;
 }
 
 /// FindFunctionNamed - Search all of the active modules to find the one that
@@ -108,7 +91,7 @@ void ExecutionEngine::deleteModuleProvider(ModuleProvider *P,
 /// general code.
 Function *ExecutionEngine::FindFunctionNamed(const char *FnName) {
   for (unsigned i = 0, e = Modules.size(); i != e; ++i) {
-    if (Function *F = Modules[i]->getModule()->getFunction(FnName))
+    if (Function *F = Modules[i]->getFunction(FnName))
       return F;
   }
   return 0;
@@ -316,7 +299,7 @@ void ExecutionEngine::runStaticConstructorsDestructors(Module *module,
 void ExecutionEngine::runStaticConstructorsDestructors(bool isDtors) {
   // Execute global ctors/dtors for each module in the program.
   for (unsigned m = 0, e = Modules.size(); m != e; ++m)
-    runStaticConstructorsDestructors(Modules[m]->getModule(), isDtors);
+    runStaticConstructorsDestructors(Modules[m], isDtors);
 }
 
 #ifndef NDEBUG
@@ -393,12 +376,12 @@ int ExecutionEngine::runFunctionAsMain(Function *Fn,
 /// Interpreter or there's an error. If even an Interpreter cannot be created,
 /// NULL is returned.
 ///
-ExecutionEngine *ExecutionEngine::create(ModuleProvider *MP,
+ExecutionEngine *ExecutionEngine::create(Module *M,
                                          bool ForceInterpreter,
                                          std::string *ErrorStr,
                                          CodeGenOpt::Level OptLevel,
                                          bool GVsWithCode) {
-  return EngineBuilder(MP)
+  return EngineBuilder(M)
       .setEngineKind(ForceInterpreter
                      ? EngineKind::Interpreter
                      : EngineKind::JIT)
@@ -408,16 +391,6 @@ ExecutionEngine *ExecutionEngine::create(ModuleProvider *MP,
       .create();
 }
 
-ExecutionEngine *ExecutionEngine::create(Module *M) {
-  return EngineBuilder(M).create();
-}
-
-/// EngineBuilder - Overloaded constructor that automatically creates an
-/// ExistingModuleProvider for an existing module.
-EngineBuilder::EngineBuilder(Module *m) : MP(new ExistingModuleProvider(m)) {
-  InitEngine();
-}
-
 ExecutionEngine *EngineBuilder::create() {
   // Make sure we can resolve symbols in the program as well. The zero arg
   // to the function tells DynamicLibrary to load the program, not a library.
@@ -443,8 +416,9 @@ ExecutionEngine *EngineBuilder::create() {
   if (WhichEngine & EngineKind::JIT) {
     if (ExecutionEngine::JITCtor) {
       ExecutionEngine *EE =
-        ExecutionEngine::JITCtor(MP, ErrorStr, JMM, OptLevel,
-                                 AllocateGVsWithCode, CMModel);
+        ExecutionEngine::JITCtor(M, ErrorStr, JMM, OptLevel,
+                                 AllocateGVsWithCode, CMModel,
+                                 MArch, MCPU, MAttrs);
       if (EE) return EE;
     }
   }
@@ -453,7 +427,7 @@ ExecutionEngine *EngineBuilder::create() {
   // an interpreter instead.
   if (WhichEngine & EngineKind::Interpreter) {
     if (ExecutionEngine::InterpCtor)
-      return ExecutionEngine::InterpCtor(MP, ErrorStr);
+      return ExecutionEngine::InterpCtor(M, ErrorStr);
     if (ErrorStr)
       *ErrorStr = "Interpreter has not been linked in.";
     return 0;
@@ -969,7 +943,7 @@ void ExecutionEngine::emitGlobals() {
 
   if (Modules.size() != 1) {
     for (unsigned m = 0, e = Modules.size(); m != e; ++m) {
-      Module &M = *Modules[m]->getModule();
+      Module &M = *Modules[m];
       for (Module::const_global_iterator I = M.global_begin(),
            E = M.global_end(); I != E; ++I) {
         const GlobalValue *GV = I;
@@ -1003,7 +977,7 @@ void ExecutionEngine::emitGlobals() {
   
   std::vector<const GlobalValue*> NonCanonicalGlobals;
   for (unsigned m = 0, e = Modules.size(); m != e; ++m) {
-    Module &M = *Modules[m]->getModule();
+    Module &M = *Modules[m];
     for (Module::const_global_iterator I = M.global_begin(), E = M.global_end();
          I != E; ++I) {
       // In the multi-module case, see what this global maps to.
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/ExecutionEngineBindings.cpp b/libclamav/c++/llvm/lib/ExecutionEngine/ExecutionEngineBindings.cpp
index 412b493..141cb27 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/ExecutionEngineBindings.cpp
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/ExecutionEngineBindings.cpp
@@ -174,20 +174,16 @@ void LLVMFreeMachineCodeForFunction(LLVMExecutionEngineRef EE, LLVMValueRef F) {
 }
 
 void LLVMAddModuleProvider(LLVMExecutionEngineRef EE, LLVMModuleProviderRef MP){
-  unwrap(EE)->addModuleProvider(unwrap(MP));
+  unwrap(EE)->addModule(unwrap(MP));
 }
 
 LLVMBool LLVMRemoveModuleProvider(LLVMExecutionEngineRef EE,
                                   LLVMModuleProviderRef MP,
                                   LLVMModuleRef *OutMod, char **OutError) {
-  std::string Error;
-  if (Module *Gone = unwrap(EE)->removeModuleProvider(unwrap(MP), &Error)) {
-    *OutMod = wrap(Gone);
-    return 0;
-  }
-  if (OutError)
-    *OutError = strdup(Error.c_str());
-  return 1;
+  Module *M = unwrap(MP);
+  unwrap(EE)->removeModule(M);
+  *OutMod = wrap(M);
+  return 0;
 }
 
 LLVMBool LLVMFindFunction(LLVMExecutionEngineRef EE, const char *Name,
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/ExternalFunctions.cpp b/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/ExternalFunctions.cpp
index c02d84f..7b061d3 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/ExternalFunctions.cpp
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/ExternalFunctions.cpp
@@ -368,7 +368,7 @@ GenericValue lle_X_sprintf(const FunctionType *FT,
 
       switch (Last) {
       case '%':
-        strcpy(Buffer, "%"); break;
+        memcpy(Buffer, "%", 2); break;
       case 'c':
         sprintf(Buffer, FmtBuf, uint32_t(Args[ArgNo++].IntVal.getZExtValue()));
         break;
@@ -400,8 +400,9 @@ GenericValue lle_X_sprintf(const FunctionType *FT,
         errs() << "<unknown printf code '" << *FmtStr << "'!>";
         ArgNo++; break;
       }
-      strcpy(OutputBuffer, Buffer);
-      OutputBuffer += strlen(Buffer);
+      size_t Len = strlen(Buffer);
+      memcpy(OutputBuffer, Buffer, Len + 1);
+      OutputBuffer += Len;
       }
       break;
     }
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/Interpreter.cpp b/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/Interpreter.cpp
index 9be6a92..43e3453 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/Interpreter.cpp
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/Interpreter.cpp
@@ -17,7 +17,6 @@
 #include "llvm/CodeGen/IntrinsicLowering.h"
 #include "llvm/DerivedTypes.h"
 #include "llvm/Module.h"
-#include "llvm/ModuleProvider.h"
 #include <cstring>
 using namespace llvm;
 
@@ -33,20 +32,20 @@ extern "C" void LLVMLinkInInterpreter() { }
 
 /// create - Create a new interpreter object.  This can never fail.
 ///
-ExecutionEngine *Interpreter::create(ModuleProvider *MP, std::string* ErrStr) {
-  // Tell this ModuleProvide to materialize and release the module
-  if (!MP->materializeModule(ErrStr))
+ExecutionEngine *Interpreter::create(Module *M, std::string* ErrStr) {
+  // Tell this Module to materialize everything and release the GVMaterializer.
+  if (M->MaterializeAllPermanently(ErrStr))
     // We got an error, just return 0
     return 0;
 
-  return new Interpreter(MP);
+  return new Interpreter(M);
 }
 
 //===----------------------------------------------------------------------===//
 // Interpreter ctor - Initialize stuff
 //
-Interpreter::Interpreter(ModuleProvider *M)
-  : ExecutionEngine(M), TD(M->getModule()) {
+Interpreter::Interpreter(Module *M)
+  : ExecutionEngine(M), TD(M) {
       
   memset(&ExitValue.Untyped, 0, sizeof(ExitValue.Untyped));
   setTargetData(&TD);
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/Interpreter.h b/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/Interpreter.h
index 038830c..bc4200b 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/Interpreter.h
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/Interpreter/Interpreter.h
@@ -94,7 +94,7 @@ class Interpreter : public ExecutionEngine, public InstVisitor<Interpreter> {
   std::vector<Function*> AtExitHandlers;
 
 public:
-  explicit Interpreter(ModuleProvider *M);
+  explicit Interpreter(Module *M);
   ~Interpreter();
 
   /// runAtExitHandlers - Run any functions registered by the program's calls to
@@ -108,7 +108,7 @@ public:
   
   /// create - Create an interpreter ExecutionEngine. This can never fail.
   ///
-  static ExecutionEngine *create(ModuleProvider *M, std::string *ErrorStr = 0);
+  static ExecutionEngine *create(Module *M, std::string *ErrorStr = 0);
 
   /// run - Start execution with the specified function and arguments.
   ///
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.cpp b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.cpp
index 2612bf2..236c219 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.cpp
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.cpp
@@ -18,7 +18,7 @@
 #include "llvm/Function.h"
 #include "llvm/GlobalVariable.h"
 #include "llvm/Instructions.h"
-#include "llvm/ModuleProvider.h"
+#include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/CodeGen/JITCodeEmitter.h"
 #include "llvm/CodeGen/MachineCodeInfo.h"
 #include "llvm/ExecutionEngine/GenericValue.h"
@@ -28,6 +28,7 @@
 #include "llvm/Target/TargetJITInfo.h"
 #include "llvm/Support/Dwarf.h"
 #include "llvm/Support/ErrorHandling.h"
+#include "llvm/Support/ManagedStatic.h"
 #include "llvm/Support/MutexGuard.h"
 #include "llvm/System/DynamicLibrary.h"
 #include "llvm/Config/config.h"
@@ -172,7 +173,7 @@ void DarwinRegisterFrame(void* FrameBegin) {
   ob->encoding.i = 0; 
   ob->encoding.b.encoding = llvm::dwarf::DW_EH_PE_omit;
   
-  // Put the info on both places, as libgcc uses the first or the the second
+  // Put the info on both places, as libgcc uses the first or the second
   // field. Note that we rely on having two pointers here. If fde_end was a
   // char, things would get complicated.
   ob->fde_end = (char*)LOI->unseenObjects;
@@ -193,22 +194,31 @@ void DarwinRegisterFrame(void* FrameBegin) {
 
 /// createJIT - This is the factory method for creating a JIT for the current
 /// machine; it does not fall back to the interpreter.  This takes ownership
-/// of the module provider.
-ExecutionEngine *ExecutionEngine::createJIT(ModuleProvider *MP,
+/// of the module.
+ExecutionEngine *ExecutionEngine::createJIT(Module *M,
                                             std::string *ErrorStr,
                                             JITMemoryManager *JMM,
                                             CodeGenOpt::Level OptLevel,
                                             bool GVsWithCode,
-					    CodeModel::Model CMM) {
-  return JIT::createJIT(MP, ErrorStr, JMM, OptLevel, GVsWithCode, CMM);
+                                            CodeModel::Model CMM) {
+  // Use the defaults for extra parameters.  Users can use EngineBuilder to
+  // set them.
+  StringRef MArch = "";
+  StringRef MCPU = "";
+  SmallVector<std::string, 1> MAttrs;
+  return JIT::createJIT(M, ErrorStr, JMM, OptLevel, GVsWithCode, CMM,
+                        MArch, MCPU, MAttrs);
 }
 
-ExecutionEngine *JIT::createJIT(ModuleProvider *MP,
+ExecutionEngine *JIT::createJIT(Module *M,
                                 std::string *ErrorStr,
                                 JITMemoryManager *JMM,
                                 CodeGenOpt::Level OptLevel,
                                 bool GVsWithCode,
-                                CodeModel::Model CMM) {
+                                CodeModel::Model CMM,
+                                StringRef MArch,
+                                StringRef MCPU,
+                                const SmallVectorImpl<std::string>& MAttrs) {
   // Make sure we can resolve symbols in the program as well. The zero arg
   // to the function tells DynamicLibrary to load the program, not a library.
 /* CLAMAV LOCAL: no dlopen */
@@ -216,13 +226,13 @@ ExecutionEngine *JIT::createJIT(ModuleProvider *MP,
 //   return 0;
 
   // Pick a target either via -march or by guessing the native arch.
-  TargetMachine *TM = JIT::selectTarget(MP, ErrorStr);
+  TargetMachine *TM = JIT::selectTarget(M, MArch, MCPU, MAttrs, ErrorStr);
   if (!TM || (ErrorStr && ErrorStr->length() > 0)) return 0;
   TM->setCodeModel(CMM);
 
   // If the target supports JIT code generation, create the JIT.
   if (TargetJITInfo *TJ = TM->getJITInfo()) {
-    return new JIT(MP, *TM, *TJ, JMM, OptLevel, GVsWithCode);
+    return new JIT(M, *TM, *TJ, JMM, OptLevel, GVsWithCode);
   } else {
     if (ErrorStr)
       *ErrorStr = "target does not support JIT code generation";
@@ -230,16 +240,63 @@ ExecutionEngine *JIT::createJIT(ModuleProvider *MP,
   }
 }
 
-JIT::JIT(ModuleProvider *MP, TargetMachine &tm, TargetJITInfo &tji,
+namespace {
+/// This class supports the global getPointerToNamedFunction(), which allows
+/// bugpoint or gdb users to search for a function by name without any context.
+class JitPool {
+  SmallPtrSet<JIT*, 1> JITs;  // Optimize for process containing just 1 JIT.
+  mutable sys::Mutex Lock;
+public:
+  void Add(JIT *jit) {
+    MutexGuard guard(Lock);
+    JITs.insert(jit);
+  }
+  void Remove(JIT *jit) {
+    MutexGuard guard(Lock);
+    JITs.erase(jit);
+  }
+  void *getPointerToNamedFunction(const char *Name) const {
+    MutexGuard guard(Lock);
+    assert(JITs.size() != 0 && "No JIT registered");
+    // Search for the function in every instance of the JIT.
+    for (SmallPtrSet<JIT*, 1>::const_iterator Jit = JITs.begin(),
+           end = JITs.end();
+         Jit != end; ++Jit) {
+      if (Function *F = (*Jit)->FindFunctionNamed(Name))
+        return (*Jit)->getPointerToFunction(F);
+    }
+    // The function is not available: fall back to the first created JIT (it
+    // will search the symbols of the current program/library).
+    return (*JITs.begin())->getPointerToNamedFunction(Name);
+  }
+};
+ManagedStatic<JitPool> AllJits;
+}
+extern "C" {
+  // getPointerToNamedFunction - This function is used as a global wrapper to
+  // JIT::getPointerToNamedFunction for the purpose of resolving symbols when
+  // bugpoint is debugging the JIT. In that scenario, we are loading an .so and
+  // need to resolve function(s) that are being mis-codegenerated, so we need to
+  // resolve their addresses at runtime, and this is the way to do it.
+  void *getPointerToNamedFunction(const char *Name) {
+    return AllJits->getPointerToNamedFunction(Name);
+  }
+}
+
+JIT::JIT(Module *M, TargetMachine &tm, TargetJITInfo &tji,
          JITMemoryManager *JMM, CodeGenOpt::Level OptLevel, bool GVsWithCode)
-  : ExecutionEngine(MP), TM(tm), TJI(tji), AllocateGVsWithCode(GVsWithCode) {
+  : ExecutionEngine(M), TM(tm), TJI(tji), AllocateGVsWithCode(GVsWithCode),
+    isAlreadyCodeGenerating(false) {
   setTargetData(TM.getTargetData());
 
-  jitstate = new JITState(MP);
+  jitstate = new JITState(M);
 
   // Initialize JCE
   JCE = createEmitter(*this, JMM, TM);
 
+  // Register in global list of all JITs.
+  AllJits->Add(this);
+
   // Add target data
   MutexGuard locked(lock);
   FunctionPassManager &PM = jitstate->getPM(locked);
@@ -274,21 +331,21 @@ JIT::JIT(ModuleProvider *MP, TargetMachine &tm, TargetJITInfo &tji,
 }
 
 JIT::~JIT() {
+  AllJits->Remove(this);
   delete jitstate;
   delete JCE;
   delete &TM;
 }
 
-/// addModuleProvider - Add a new ModuleProvider to the JIT.  If we previously
-/// removed the last ModuleProvider, we need re-initialize jitstate with a valid
-/// ModuleProvider.
-void JIT::addModuleProvider(ModuleProvider *MP) {
+/// addModule - Add a new Module to the JIT.  If we previously removed the last
+/// Module, we need to re-initialize jitstate with a valid Module.
+void JIT::addModule(Module *M) {
   MutexGuard locked(lock);
 
   if (Modules.empty()) {
     assert(!jitstate && "jitstate should be NULL if Modules vector is empty!");
 
-    jitstate = new JITState(MP);
+    jitstate = new JITState(M);
 
     FunctionPassManager &PM = jitstate->getPM(locked);
     PM.add(new TargetData(*TM.getTargetData()));
@@ -303,18 +360,17 @@ void JIT::addModuleProvider(ModuleProvider *MP) {
     PM.doInitialization();
   }
   
-  ExecutionEngine::addModuleProvider(MP);
+  ExecutionEngine::addModule(M);
 }
 
-/// removeModuleProvider - If we are removing the last ModuleProvider, 
-/// invalidate the jitstate since the PassManager it contains references a
-/// released ModuleProvider.
-Module *JIT::removeModuleProvider(ModuleProvider *MP, std::string *E) {
-  Module *result = ExecutionEngine::removeModuleProvider(MP, E);
+/// removeModule - If we are removing the last Module, invalidate the jitstate
+/// since the PassManager it contains references a released Module.
+bool JIT::removeModule(Module *M) {
+  bool result = ExecutionEngine::removeModule(M);
   
   MutexGuard locked(lock);
   
-  if (jitstate->getMP() == MP) {
+  if (jitstate->getModule() == M) {
     delete jitstate;
     jitstate = 0;
   }
@@ -337,62 +393,6 @@ Module *JIT::removeModuleProvider(ModuleProvider *MP, std::string *E) {
   return result;
 }
 
-/// deleteModuleProvider - Remove a ModuleProvider from the list of modules,
-/// and deletes the ModuleProvider and owned Module.  Avoids materializing 
-/// the underlying module.
-void JIT::deleteModuleProvider(ModuleProvider *MP, std::string *E) {
-  ExecutionEngine::deleteModuleProvider(MP, E);
-  
-  MutexGuard locked(lock);
-  
-  if (jitstate->getMP() == MP) {
-    delete jitstate;
-    jitstate = 0;
-  }
-
-  if (!jitstate && !Modules.empty()) {
-    jitstate = new JITState(Modules[0]);
-    
-    FunctionPassManager &PM = jitstate->getPM(locked);
-    PM.add(new TargetData(*TM.getTargetData()));
-    
-    // Turn the machine code intermediate representation into bytes in memory
-    // that may be executed.
-    if (TM.addPassesToEmitMachineCode(PM, *JCE, CodeGenOpt::Default)) {
-      llvm_report_error("Target does not support machine code emission!");
-    }
-    
-    // Initialize passes.
-    PM.doInitialization();
-  }    
-}
-
-/// materializeFunction - make sure the given function is fully read.  If the
-/// module is corrupt, this returns true and fills in the optional string with
-/// information about the problem.  If successful, this returns false.
-bool JIT::materializeFunction(Function *F, std::string *ErrInfo) {
-  // Read in the function if it exists in this Module.
-  if (F->hasNotBeenReadFromBitcode()) {
-    // Determine the module provider this function is provided by.
-    Module *M = F->getParent();
-    ModuleProvider *MP = 0;
-    for (unsigned i = 0, e = Modules.size(); i != e; ++i) {
-      if (Modules[i]->getModule() == M) {
-        MP = Modules[i];
-        break;
-      }
-    }
-    if (MP)
-      return MP->materializeFunction(F, ErrInfo);
-
-    if (ErrInfo)
-      *ErrInfo = "Function isn't in a module we know about!";
-    return true;
-  }
-  // Succeed if the function is already read.
-  return false;
-}
-
 /// run - Start execution with the specified function and arguments.
 ///
 GenericValue JIT::runFunction(Function *F,
@@ -554,8 +554,12 @@ GenericValue JIT::runFunction(Function *F,
   else
     ReturnInst::Create(F->getContext(), StubBB);           // Just return void.
 
-  // Finally, return the value returned by our nullary stub function.
-  return runFunction(Stub, std::vector<GenericValue>());
+  // Finally, call our nullary stub function.
+  GenericValue Result = runFunction(Stub, std::vector<GenericValue>());
+  // Erase it, since no other function can have a reference to it.
+  Stub->eraseFromParent();
+  // And return the result.
+  return Result;
 }
 
 void JIT::RegisterJITEventListener(JITEventListener *L) {
@@ -621,7 +625,6 @@ void JIT::runJITOnFunction(Function *F, MachineCodeInfo *MCI) {
 }
 
 void JIT::runJITOnFunctionUnlocked(Function *F, const MutexGuard &locked) {
-  static bool isAlreadyCodeGenerating = false;
   assert(!isAlreadyCodeGenerating && "Error: Recursive compilation detected!");
 
   // JIT the function
@@ -662,7 +665,7 @@ void *JIT::getPointerToFunction(Function *F) {
   // Now that this thread owns the lock, make sure we read in the function if it
   // exists in this Module.
   std::string ErrorMsg;
-  if (materializeFunction(F, &ErrorMsg)) {
+  if (F->Materialize(&ErrorMsg)) {
     llvm_report_error("Error reading function '" + F->getName()+
                       "' from bitcode file: " + ErrorMsg);
   }
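[Editor's note: the JitPool introduced above replaces the old process-wide `TheJIT` pointer so that several JIT instances can coexist and a context-free symbol lookup can scan all of them. A minimal standalone sketch of the same registry pattern, with hypothetical names and `std::mutex` standing in for LLVM's `sys::Mutex`:]

```cpp
#include <cassert>
#include <mutex>
#include <set>
#include <string>

// Hypothetical stand-in for llvm::JIT: each instance can try to
// resolve one symbol name to an address.
struct FakeJit {
    std::string known;   // the one symbol this JIT knows
    void *addr;
    void *lookup(const std::string &name) {
        return name == known ? addr : nullptr;
    }
};

// Registry mirroring the JitPool in the patch: every live JIT registers
// itself so a global lookup with no other context can scan each instance.
class JitRegistry {
    std::set<FakeJit *> jits;
    mutable std::mutex lock;
public:
    void add(FakeJit *j)    { std::lock_guard<std::mutex> g(lock); jits.insert(j); }
    void remove(FakeJit *j) { std::lock_guard<std::mutex> g(lock); jits.erase(j); }
    void *resolve(const std::string &name) const {
        std::lock_guard<std::mutex> g(lock);
        assert(!jits.empty() && "No JIT registered");
        for (FakeJit *j : jits)
            if (void *p = j->lookup(name))
                return p;
        // The real code instead falls back to the first JIT's
        // getPointerToNamedFunction here; this sketch just fails.
        return nullptr;
    }
};
```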
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.h b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.h
index b6f74ff..edae719 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.h
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JIT.h
@@ -30,20 +30,20 @@ class TargetMachine;
 class JITState {
 private:
   FunctionPassManager PM;  // Passes to compile a function
-  ModuleProvider *MP;      // ModuleProvider used to create the PM
+  Module *M;               // Module used to create the PM
 
   /// PendingFunctions - Functions which have not been code generated yet, but
   /// were called from a function being code generated.
   std::vector<AssertingVH<Function> > PendingFunctions;
 
 public:
-  explicit JITState(ModuleProvider *MP) : PM(MP), MP(MP) {}
+  explicit JITState(Module *M) : PM(M), M(M) {}
 
   FunctionPassManager &getPM(const MutexGuard &L) {
     return PM;
   }
   
-  ModuleProvider *getMP() const { return MP; }
+  Module *getModule() const { return M; }
   std::vector<AssertingVH<Function> > &getPendingFunctions(const MutexGuard &L){
     return PendingFunctions;
   }
@@ -61,16 +61,20 @@ class JIT : public ExecutionEngine {
   /// should be set to true.  Doing so breaks freeMachineCodeForFunction.
   bool AllocateGVsWithCode;
 
+  /// True while the JIT is generating code.  Used to assert against recursive
+  /// entry.
+  bool isAlreadyCodeGenerating;
+
   JITState *jitstate;
 
-  JIT(ModuleProvider *MP, TargetMachine &tm, TargetJITInfo &tji,
+  JIT(Module *M, TargetMachine &tm, TargetJITInfo &tji,
       JITMemoryManager *JMM, CodeGenOpt::Level OptLevel,
       bool AllocateGVsWithCode);
 public:
   ~JIT();
 
   static void Register() {
-    JITCtor = create;
+    JITCtor = createJIT;
   }
   
   /// getJITInfo - Return the target JIT information structure.
@@ -80,35 +84,22 @@ public:
   /// create - Create and return a new JIT compiler if there is one available
   /// for the current target.  Otherwise, return null.
   ///
-  static ExecutionEngine *create(ModuleProvider *MP,
+  static ExecutionEngine *create(Module *M,
                                  std::string *Err,
                                  JITMemoryManager *JMM,
                                  CodeGenOpt::Level OptLevel =
                                    CodeGenOpt::Default,
                                  bool GVsWithCode = true,
 				 CodeModel::Model CMM = CodeModel::Default) {
-    return ExecutionEngine::createJIT(MP, Err, JMM, OptLevel, GVsWithCode,
+    return ExecutionEngine::createJIT(M, Err, JMM, OptLevel, GVsWithCode,
 				      CMM);
   }
 
-  virtual void addModuleProvider(ModuleProvider *MP);
+  virtual void addModule(Module *M);
   
-  /// removeModuleProvider - Remove a ModuleProvider from the list of modules.
-  /// Relases the Module from the ModuleProvider, materializing it in the
-  /// process, and returns the materialized Module.
-  virtual Module *removeModuleProvider(ModuleProvider *MP,
-                                       std::string *ErrInfo = 0);
-
-  /// deleteModuleProvider - Remove a ModuleProvider from the list of modules,
-  /// and deletes the ModuleProvider and owned Module.  Avoids materializing 
-  /// the underlying module.
-  virtual void deleteModuleProvider(ModuleProvider *P,std::string *ErrInfo = 0);
-
-  /// materializeFunction - make sure the given function is fully read.  If the
-  /// module is corrupt, this returns true and fills in the optional string with
-  /// information about the problem.  If successful, this returns false.
-  ///
-  bool materializeFunction(Function *F, std::string *ErrInfo = 0);
+  /// removeModule - Remove a Module from the list of modules.  Returns true if
+  /// M is found.
+  virtual bool removeModule(Module *M);
 
   /// runFunction - Start execution with the specified function and arguments.
   ///
@@ -177,14 +168,21 @@ public:
 
   /// selectTarget - Pick a target either via -march or by guessing the native
   /// arch.  Add any CPU features specified via -mcpu or -mattr.
-  static TargetMachine *selectTarget(ModuleProvider *MP, std::string *Err);
+  static TargetMachine *selectTarget(Module *M,
+                                     StringRef MArch,
+                                     StringRef MCPU,
+                                     const SmallVectorImpl<std::string>& MAttrs,
+                                     std::string *Err);
 
-  static ExecutionEngine *createJIT(ModuleProvider *MP,
+  static ExecutionEngine *createJIT(Module *M,
                                     std::string *ErrorStr,
                                     JITMemoryManager *JMM,
                                     CodeGenOpt::Level OptLevel,
                                     bool GVsWithCode,
-				    CodeModel::Model CMM);
+                                    CodeModel::Model CMM,
+                                    StringRef MArch,
+                                    StringRef MCPU,
+                                    const SmallVectorImpl<std::string>& MAttrs);
 
   // Run the JIT on F and return information about the generated code
   void runJITOnFunction(Function *F, MachineCodeInfo *MCI = 0);
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITEmitter.cpp b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITEmitter.cpp
index 4d58574..57c4375 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITEmitter.cpp
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/JITEmitter.cpp
@@ -37,6 +37,7 @@
 #include "llvm/Target/TargetOptions.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
+#include "llvm/Support/ManagedStatic.h"
 #include "llvm/Support/MutexGuard.h"
 #include "llvm/Support/ValueHandle.h"
 #include "llvm/Support/raw_ostream.h"
@@ -57,13 +58,12 @@ using namespace llvm;
 STATISTIC(NumBytes, "Number of bytes of machine code compiled");
 STATISTIC(NumRelos, "Number of relocations applied");
 STATISTIC(NumRetries, "Number of retries with more memory");
-static JIT *TheJIT = 0;
 
 
 // A declaration may stop being a declaration once it's fully read from bitcode.
 // This function returns true if F is fully read and is still a declaration.
 static bool isNonGhostDeclaration(const Function *F) {
-  return F->isDeclaration() && !F->hasNotBeenReadFromBitcode();
+  return F->isDeclaration() && !F->isMaterializable();
 }
 
 //===----------------------------------------------------------------------===//
@@ -109,9 +109,13 @@ namespace {
     /// particular GlobalVariable so that we can reuse them if necessary.
     GlobalToIndirectSymMapTy GlobalToIndirectSymMap;
 
+    /// Instance of the JIT this ResolverState serves.
+    JIT *TheJIT;
+
   public:
-    JITResolverState() : FunctionToLazyStubMap(this),
-                         FunctionToCallSitesMap(this) {}
+    JITResolverState(JIT *jit) : FunctionToLazyStubMap(this),
+                                 FunctionToCallSitesMap(this),
+                                 TheJIT(jit) {}
 
     FunctionToLazyStubMapTy& getFunctionToLazyStubMap(
       const MutexGuard& locked) {
@@ -227,18 +231,13 @@ namespace {
 
     JITEmitter &JE;
 
-    static JITResolver *TheJITResolver;
-  public:
-    explicit JITResolver(JIT &jit, JITEmitter &je) : nextGOTIndex(0), JE(je) {
-      TheJIT = &jit;
+    /// Instance of JIT corresponding to this Resolver.
+    JIT *TheJIT;
 
+  public:
+    explicit JITResolver(JIT &jit, JITEmitter &je)
+      : state(&jit), nextGOTIndex(0), JE(je), TheJIT(&jit) {
       LazyResolverFn = jit.getJITInfo().getLazyResolverFunction(JITCompilerFn);
-      assert(TheJITResolver == 0 && "Multiple JIT resolvers?");
-      TheJITResolver = this;
-    }
-
-    ~JITResolver() {
-      TheJITResolver = 0;
     }
 
     /// getLazyFunctionStubIfAvailable - This returns a pointer to a function's
@@ -273,6 +272,44 @@ namespace {
     static void *JITCompilerFn(void *Stub);
   };
 
+  class StubToResolverMapTy {
+    /// Map a stub address to a specific instance of a JITResolver so that
+    /// lazily-compiled functions can find the right resolver to use.
+    ///
+    /// Guarded by Lock.
+    std::map<void*, JITResolver*> Map;
+
+    /// Guards Map from concurrent accesses.
+    mutable sys::Mutex Lock;
+
+  public:
+    /// Registers a Stub to be resolved by Resolver.
+    void RegisterStubResolver(void *Stub, JITResolver *Resolver) {
+      MutexGuard guard(Lock);
+      Map.insert(std::make_pair(Stub, Resolver));
+    }
+    /// Unregisters the Stub when it's invalidated.
+    void UnregisterStubResolver(void *Stub) {
+      MutexGuard guard(Lock);
+      Map.erase(Stub);
+    }
+    /// Returns the JITResolver instance that owns the Stub.
+    JITResolver *getResolverFromStub(void *Stub) const {
+      MutexGuard guard(Lock);
+      // The address given to us for the stub may not be exactly right, it might
+      // be a little bit after the stub.  As such, use upper_bound to find it.
+      // This is the same trick as in LookupFunctionFromCallSite from
+      // JITResolverState.
+      std::map<void*, JITResolver*>::const_iterator I = Map.upper_bound(Stub);
+      assert(I != Map.begin() && "This is not a known stub!");
+      --I;
+      return I->second;
+    }
+  };
+  /// This needs to be static so that a lazy call stub can access it with no
+  /// context except the address of the stub.
+  ManagedStatic<StubToResolverMapTy> StubToResolverMap;
+
   /// JITEmitter - The JIT implementation of the MachineCodeEmitter, which is
   /// used to output functions to memory for execution.
   class JITEmitter : public JITCodeEmitter {
@@ -371,10 +408,13 @@ namespace {
 
     DILocation PrevDLT;
 
+    /// Instance of the JIT this JITEmitter serves.
+    JIT *TheJIT;
+
   public:
     JITEmitter(JIT &jit, JITMemoryManager *JMM, TargetMachine &TM)
       : SizeEstimate(0), Resolver(jit, *this), MMI(0), CurFn(0),
-        EmittedFunctions(this), PrevDLT(NULL) {
+        EmittedFunctions(this), PrevDLT(NULL), TheJIT(&jit) {
       MemMgr = JMM ? JMM : JITMemoryManager::CreateDefaultMemManager();
       if (jit.getJITInfo().needsGOT()) {
         MemMgr->AllocateGOT();
@@ -495,8 +535,6 @@ namespace {
   };
 }
 
-JITResolver *JITResolver::TheJITResolver = 0;
-
 void CallSiteValueMapConfig::onDelete(JITResolverState *JRS, Function *F) {
   JRS->EraseAllCallSitesPrelocked(F);
 }
@@ -551,6 +589,10 @@ void *JITResolver::getLazyFunctionStub(Function *F) {
   DEBUG(dbgs() << "JIT: Lazy stub emitted at [" << Stub << "] for function '"
         << F->getName() << "'\n");
 
+  // Register this JITResolver as the one corresponding to this call site so
+  // JITCompilerFn will be able to find it.
+  StubToResolverMap->RegisterStubResolver(Stub, this);
+
   // Finally, keep track of the stub-to-Function mapping so that the
   // JITCompilerFn knows which function to compile!
   state.AddCallSite(locked, Stub, F);
@@ -637,6 +679,9 @@ void JITResolver::getRelocatableGVs(SmallVectorImpl<GlobalValue*> &GVs,
 GlobalValue *JITResolver::invalidateStub(void *Stub) {
   MutexGuard locked(TheJIT->lock);
 
+  // Remove the stub from the StubToResolverMap.
+  StubToResolverMap->UnregisterStubResolver(Stub);
+
   GlobalToIndirectSymMapTy &GM = state.getGlobalToIndirectSymMap(locked);
 
   // Look up the cheap way first, to see if it's a function stub we are
@@ -671,7 +716,8 @@ GlobalValue *JITResolver::invalidateStub(void *Stub) {
 /// been entered.  It looks up which function this stub corresponds to, compiles
 /// it if necessary, then returns the resultant function pointer.
 void *JITResolver::JITCompilerFn(void *Stub) {
-  JITResolver &JR = *TheJITResolver;
+  JITResolver *JR = StubToResolverMap->getResolverFromStub(Stub);
+  assert(JR && "Unable to find the corresponding JITResolver to the call site");
 
   Function* F = 0;
   void* ActualPtr = 0;
@@ -680,24 +726,24 @@ void *JITResolver::JITCompilerFn(void *Stub) {
     // Only lock for getting the Function. The call getPointerToFunction made
     // in this function might trigger function materializing, which requires
     // JIT lock to be unlocked.
-    MutexGuard locked(TheJIT->lock);
+    MutexGuard locked(JR->TheJIT->lock);
 
     // The address given to us for the stub may not be exactly right, it might
     // be a little bit after the stub.  As such, use upper_bound to find it.
     pair<void*, Function*> I =
-      JR.state.LookupFunctionFromCallSite(locked, Stub);
+      JR->state.LookupFunctionFromCallSite(locked, Stub);
     F = I.second;
     ActualPtr = I.first;
   }
 
   // If we have already code generated the function, just return the address.
-  void *Result = TheJIT->getPointerToGlobalIfAvailable(F);
+  void *Result = JR->TheJIT->getPointerToGlobalIfAvailable(F);
 
   if (!Result) {
     // Otherwise we don't have it, do lazy compilation now.
 
     // If lazy compilation is disabled, emit a useful error message and abort.
-    if (!TheJIT->isCompilingLazily()) {
+    if (!JR->TheJIT->isCompilingLazily()) {
       llvm_report_error("LLVM JIT requested to do lazy compilation of function '"
                         + F->getName() + "' when lazy compiles are disabled!");
     }
@@ -706,11 +752,11 @@ void *JITResolver::JITCompilerFn(void *Stub) {
           << "' In stub ptr = " << Stub << " actual ptr = "
           << ActualPtr << "\n");
 
-    Result = TheJIT->getPointerToFunction(F);
+    Result = JR->TheJIT->getPointerToFunction(F);
   }
 
   // Reacquire the lock to update the GOT map.
-  MutexGuard locked(TheJIT->lock);
+  MutexGuard locked(JR->TheJIT->lock);
 
   // We might like to remove the call site from the CallSiteToFunction map, but
   // we can't do that! Multiple threads could be stuck, waiting to acquire the
@@ -725,8 +771,8 @@ void *JITResolver::JITCompilerFn(void *Stub) {
   // if they see it still using the stub address.
   // Note: this is done so the Resolver doesn't have to manage GOT memory
   // Do this without allocating map space if the target isn't using a GOT
-  if(JR.revGOTMap.find(Stub) != JR.revGOTMap.end())
-    JR.revGOTMap[Result] = JR.revGOTMap[Stub];
+  if(JR->revGOTMap.find(Stub) != JR->revGOTMap.end())
+    JR->revGOTMap[Result] = JR->revGOTMap[Stub];
 
   return Result;
 }
@@ -839,7 +885,7 @@ static unsigned GetConstantPoolSizeInBytes(MachineConstantPool *MCP,
   return Size;
 }
 
-static unsigned GetJumpTableSizeInBytes(MachineJumpTableInfo *MJTI) {
+static unsigned GetJumpTableSizeInBytes(MachineJumpTableInfo *MJTI, JIT *jit) {
   const std::vector<MachineJumpTableEntry> &JT = MJTI->getJumpTables();
   if (JT.empty()) return 0;
 
@@ -847,7 +893,7 @@ static unsigned GetJumpTableSizeInBytes(MachineJumpTableInfo *MJTI) {
   for (unsigned i = 0, e = JT.size(); i != e; ++i)
     NumEntries += JT[i].MBBs.size();
 
-  return NumEntries * MJTI->getEntrySize(*TheJIT->getTargetData());
+  return NumEntries * MJTI->getEntrySize(*jit->getTargetData());
 }
 
 static uintptr_t RoundUpToAlign(uintptr_t Size, unsigned Alignment) {
@@ -1032,7 +1078,7 @@ void JITEmitter::startFunction(MachineFunction &F) {
                              MJTI->getEntryAlignment(*TheJIT->getTargetData()));
 
       // Add the jump table size
-      ActualSize += GetJumpTableSizeInBytes(MJTI);
+      ActualSize += GetJumpTableSizeInBytes(MJTI, TheJIT);
     }
 
     // Add the alignment for the function
@@ -1552,19 +1598,6 @@ JITCodeEmitter *JIT::createEmitter(JIT &jit, JITMemoryManager *JMM,
   return new JITEmitter(jit, JMM, tm);
 }
 
-// getPointerToNamedFunction - This function is used as a global wrapper to
-// JIT::getPointerToNamedFunction for the purpose of resolving symbols when
-// bugpoint is debugging the JIT. In that scenario, we are loading an .so and
-// need to resolve function(s) that are being mis-codegenerated, so we need to
-// resolve their addresses at runtime, and this is the way to do it.
-extern "C" {
-  void *getPointerToNamedFunction(const char *Name) {
-    if (Function *F = TheJIT->FindFunctionNamed(Name))
-      return TheJIT->getPointerToFunction(F);
-    return TheJIT->getPointerToNamedFunction(Name);
-  }
-}
-
 // getPointerToFunctionOrStub - If the specified function has been
 // code-gen'd, return a pointer to the function.  If not, compile it, or use
 // a stub to implement lazy compilation if available.
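[Editor's note: the StubToResolverMap added above relies on `std::map::upper_bound` to tolerate addresses that point slightly past a stub's start (the same trick as `LookupFunctionFromCallSite`). A minimal sketch of that lookup, standalone and with hypothetical names: `upper_bound` returns the first key strictly greater than the query, so the entry just before it is the stub whose start address is at or below the query.]

```cpp
#include <cassert>
#include <map>

// Map from each stub's start address to an owner id.  Given an address at
// or a little after a stub's start, find the owning entry.
int ownerOfStub(const std::map<void*, int> &stubs, void *addr) {
    // First key strictly greater than addr...
    std::map<void*, int>::const_iterator it = stubs.upper_bound(addr);
    assert(it != stubs.begin() && "This is not a known stub!");
    // ...so the previous entry starts at or below addr.
    --it;
    return it->second;
}
```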
diff --git a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/TargetSelect.cpp b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/TargetSelect.cpp
index 8bed33b..3349c33 100644
--- a/libclamav/c++/llvm/lib/ExecutionEngine/JIT/TargetSelect.cpp
+++ b/libclamav/c++/llvm/lib/ExecutionEngine/JIT/TargetSelect.cpp
@@ -15,7 +15,6 @@
 
 #include "JIT.h"
 #include "llvm/Module.h"
-#include "llvm/ModuleProvider.h"
 #include "llvm/ADT/Triple.h"
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/raw_ostream.h"
@@ -25,28 +24,14 @@
 #include "llvm/Target/TargetRegistry.h"
 using namespace llvm;
 
-static cl::opt<std::string>
-MArch("march",
-      cl::desc("Architecture to generate assembly for (see --version)"));
-
-static cl::opt<std::string>
-MCPU("mcpu",
-  cl::desc("Target a specific cpu type (-mcpu=help for details)"),
-  cl::value_desc("cpu-name"),
-  cl::init(""));
-
-static cl::list<std::string>
-MAttrs("mattr",
-  cl::CommaSeparated,
-  cl::desc("Target specific attributes (-mattr=help for details)"),
-  cl::value_desc("a1,+a2,-a3,..."));
-
 /// selectTarget - Pick a target either via -march or by guessing the native
 /// arch.  Add any CPU features specified via -mcpu or -mattr.
-TargetMachine *JIT::selectTarget(ModuleProvider *MP, std::string *ErrorStr) {
-  Module &Mod = *MP->getModule();
-
-  Triple TheTriple(Mod.getTargetTriple());
+TargetMachine *JIT::selectTarget(Module *Mod,
+                                 StringRef MArch,
+                                 StringRef MCPU,
+                                 const SmallVectorImpl<std::string>& MAttrs,
+                                 std::string *ErrorStr) {
+  Triple TheTriple(Mod->getTargetTriple());
   if (TheTriple.getTriple().empty())
     TheTriple.setTriple(sys::getHostTriple());
 
diff --git a/libclamav/c++/llvm/lib/MC/MCAsmInfo.cpp b/libclamav/c++/llvm/lib/MC/MCAsmInfo.cpp
index 796dcc4..f3f063f 100644
--- a/libclamav/c++/llvm/lib/MC/MCAsmInfo.cpp
+++ b/libclamav/c++/llvm/lib/MC/MCAsmInfo.cpp
@@ -22,11 +22,10 @@ MCAsmInfo::MCAsmInfo() {
   HasSubsectionsViaSymbols = false;
   HasMachoZeroFillDirective = false;
   HasStaticCtorDtorReferenceInStaticMode = false;
-  NeedsSet = false;
   MaxInstLength = 4;
   PCSymbol = "$";
   SeparatorChar = ';';
-  CommentColumn = 60;
+  CommentColumn = 40;
   CommentString = "#";
   GlobalPrefix = "";
   PrivateGlobalPrefix = ".";
@@ -50,8 +49,9 @@ MCAsmInfo::MCAsmInfo() {
   TextAlignFillValue = 0;
   GPRel32Directive = 0;
   GlobalDirective = "\t.globl\t";
-  SetDirective = 0;
+  HasSetDirective = true;
   HasLCOMMDirective = false;
+  COMMDirectiveAlignmentIsInBytes = true;
   HasDotTypeDotSizeDirective = true;
   HasSingleParameterDotFile = true;
   HasNoDeadStrip = false;
diff --git a/libclamav/c++/llvm/lib/MC/MCAsmInfoCOFF.cpp b/libclamav/c++/llvm/lib/MC/MCAsmInfoCOFF.cpp
index e6b79dd..9130493 100644
--- a/libclamav/c++/llvm/lib/MC/MCAsmInfoCOFF.cpp
+++ b/libclamav/c++/llvm/lib/MC/MCAsmInfoCOFF.cpp
@@ -18,13 +18,13 @@ using namespace llvm;
 
 MCAsmInfoCOFF::MCAsmInfoCOFF() {
   GlobalPrefix = "_";
+  COMMDirectiveAlignmentIsInBytes = false;
   HasLCOMMDirective = true;
   HasDotTypeDotSizeDirective = false;
   HasSingleParameterDotFile = false;
   PrivateGlobalPrefix = "L";  // Prefix for private global symbols
   WeakRefDirective = "\t.weak\t";
-  LinkOnceDirective = "\t.linkonce same_size\n";
-  SetDirective = "\t.set\t";
+  LinkOnceDirective = "\t.linkonce discard\n";
   
   // Doesn't support visibility:
   HiddenVisibilityAttr = ProtectedVisibilityAttr = MCSA_Invalid;
@@ -36,4 +36,3 @@ MCAsmInfoCOFF::MCAsmInfoCOFF() {
   SupportsDebugInformation = true;
   DwarfSectionOffsetDirective = "\t.secrel32\t";
 }
-
diff --git a/libclamav/c++/llvm/lib/MC/MCAsmInfoDarwin.cpp b/libclamav/c++/llvm/lib/MC/MCAsmInfoDarwin.cpp
index 9902f50..da865ad 100644
--- a/libclamav/c++/llvm/lib/MC/MCAsmInfoDarwin.cpp
+++ b/libclamav/c++/llvm/lib/MC/MCAsmInfoDarwin.cpp
@@ -21,12 +21,12 @@ MCAsmInfoDarwin::MCAsmInfoDarwin() {
   GlobalPrefix = "_";
   PrivateGlobalPrefix = "L";
   LinkerPrivateGlobalPrefix = "l";
-  NeedsSet = true;
   AllowQuotesInName = true;
   HasSingleParameterDotFile = false;
   HasSubsectionsViaSymbols = true;
 
   AlignmentIsInBytes = false;
+  COMMDirectiveAlignmentIsInBytes = false;
   InlineAsmStart = " InlineAsm Start";
   InlineAsmEnd = " InlineAsm End";
 
@@ -36,7 +36,6 @@ MCAsmInfoDarwin::MCAsmInfoDarwin() {
   ZeroDirective = "\t.space\t";  // ".space N" emits N zeros.
   HasMachoZeroFillDirective = true;  // Uses .zerofill
   HasStaticCtorDtorReferenceInStaticMode = true;
-  SetDirective = "\t.set";
   
   HiddenVisibilityAttr = MCSA_PrivateExtern;
   // Doesn't support protected visibility.
diff --git a/libclamav/c++/llvm/lib/MC/MCAsmStreamer.cpp b/libclamav/c++/llvm/lib/MC/MCAsmStreamer.cpp
index d177f95..6add1b4 100644
--- a/libclamav/c++/llvm/lib/MC/MCAsmStreamer.cpp
+++ b/libclamav/c++/llvm/lib/MC/MCAsmStreamer.cpp
@@ -29,25 +29,32 @@ namespace {
 class MCAsmStreamer : public MCStreamer {
   formatted_raw_ostream &OS;
   const MCAsmInfo &MAI;
-  bool IsLittleEndian, IsVerboseAsm;
   MCInstPrinter *InstPrinter;
   MCCodeEmitter *Emitter;
   
   SmallString<128> CommentToEmit;
   raw_svector_ostream CommentStream;
+
+  unsigned IsLittleEndian : 1;
+  unsigned IsVerboseAsm : 1;
+  unsigned ShowInst : 1;
+
 public:
   MCAsmStreamer(MCContext &Context, formatted_raw_ostream &os,
                 const MCAsmInfo &mai,
                 bool isLittleEndian, bool isVerboseAsm, MCInstPrinter *printer,
-                MCCodeEmitter *emitter)
-    : MCStreamer(Context), OS(os), MAI(mai), IsLittleEndian(isLittleEndian),
-      IsVerboseAsm(isVerboseAsm), InstPrinter(printer), Emitter(emitter),
-      CommentStream(CommentToEmit) {}
+                MCCodeEmitter *emitter, bool showInst)
+    : MCStreamer(Context), OS(os), MAI(mai), InstPrinter(printer),
+      Emitter(emitter), CommentStream(CommentToEmit),
+      IsLittleEndian(isLittleEndian), IsVerboseAsm(isVerboseAsm),
+      ShowInst(showInst) {
+    if (InstPrinter && IsVerboseAsm)
+      InstPrinter->setCommentStream(CommentStream);
+  }
   ~MCAsmStreamer() {}
 
   bool isLittleEndian() const { return IsLittleEndian; }
-  
-  
+
   inline void EmitEOL() {
     // If we don't have any comments, just emit a \n.
     if (!IsVerboseAsm) {
@@ -57,13 +64,20 @@ public:
     EmitCommentsAndEOL();
   }
   void EmitCommentsAndEOL();
-  
+
+  /// isVerboseAsm - Return true if this streamer supports verbose assembly at
+  /// all.
+  virtual bool isVerboseAsm() const { return IsVerboseAsm; }
+
   /// AddComment - Add a comment that can be emitted to the generated .s
   /// file if applicable as a QoI issue to make the output of the compiler
   /// more readable.  This only affects the MCAsmStreamer, and only when
   /// verbose assembly output is enabled.
   virtual void AddComment(const Twine &T);
-  
+
+  /// AddEncodingComment - Add a comment showing the encoding of an instruction.
+  virtual void AddEncodingComment(const MCInst &Inst);
+
   /// GetCommentOS - Return a raw_ostream that comments can be written to.
   /// Unlike AddComment, you are required to terminate comments with \n if you
   /// use this method.
@@ -72,12 +86,12 @@ public:
       return nulls();  // Discard comments unless in verbose asm mode.
     return CommentStream;
   }
-  
+
   /// AddBlankLine - Emit a blank line to a .s file to pretty it up.
   virtual void AddBlankLine() {
     EmitEOL();
   }
-  
+
   /// @name MCStreamer Interface
   /// @{
 
@@ -233,7 +247,7 @@ void MCAsmStreamer::EmitSymbolAttribute(MCSymbol *Symbol,
   case MCSA_ELF_TypeCommon:      /// .type _foo, STT_COMMON  # aka @common
   case MCSA_ELF_TypeNoType:      /// .type _foo, STT_NOTYPE  # aka @notype
     assert(MAI.hasDotTypeDotSizeDirective() && "Symbol Attr not supported");
-    OS << "\t.type " << *Symbol << ','
+    OS << "\t.type\t" << *Symbol << ','
        << ((MAI.getCommentString()[0] != '@') ? '@' : '%');
     switch (Attribute) {
     default: assert(0 && "Unknown ELF .type");
@@ -282,7 +296,7 @@ void MCAsmStreamer::EmitCommonSymbol(MCSymbol *Symbol, uint64_t Size,
                                      unsigned ByteAlignment) {
   OS << "\t.comm\t" << *Symbol << ',' << Size;
   if (ByteAlignment != 0) {
-    if (MAI.getAlignmentIsInBytes())
+    if (MAI.getCOMMDirectiveAlignmentIsInBytes())
       OS << ',' << ByteAlignment;
     else
       OS << ',' << Log2_32(ByteAlignment);
@@ -520,50 +534,119 @@ void MCAsmStreamer::EmitDwarfFileDirective(unsigned FileNo, StringRef Filename){
   EmitEOL();
 }
 
+void MCAsmStreamer::AddEncodingComment(const MCInst &Inst) {
+  raw_ostream &OS = GetCommentOS();
+  SmallString<256> Code;
+  SmallVector<MCFixup, 4> Fixups;
+  raw_svector_ostream VecOS(Code);
+  Emitter->EncodeInstruction(Inst, VecOS, Fixups);
+  VecOS.flush();
+
+  // If we are showing fixups, create symbolic markers in the encoded
+  // representation. We do this by making a per-bit map to the fixup item index,
+  // then trying to display it as nicely as possible.
+  SmallVector<uint8_t, 64> FixupMap;
+  FixupMap.resize(Code.size() * 8);
+  for (unsigned i = 0, e = Code.size() * 8; i != e; ++i)
+    FixupMap[i] = 0;
+
+  for (unsigned i = 0, e = Fixups.size(); i != e; ++i) {
+    MCFixup &F = Fixups[i];
+    const MCFixupKindInfo &Info = Emitter->getFixupKindInfo(F.getKind());
+    for (unsigned j = 0; j != Info.TargetSize; ++j) {
+      unsigned Index = F.getOffset() * 8 + Info.TargetOffset + j;
+      assert(Index < Code.size() * 8 && "Invalid offset in fixup!");
+      FixupMap[Index] = 1 + i;
+    }
+  }
 
-void MCAsmStreamer::EmitInstruction(const MCInst &Inst) {
-  assert(CurSection && "Cannot emit contents before setting section!");
+  OS << "encoding: [";
+  for (unsigned i = 0, e = Code.size(); i != e; ++i) {
+    if (i)
+      OS << ',';
 
-  // If we have an AsmPrinter, use that to print.
-  if (InstPrinter) {
-    InstPrinter->printInst(&Inst);
-    EmitEOL();
+    // See if all bits are the same map entry.
+    uint8_t MapEntry = FixupMap[i * 8 + 0];
+    for (unsigned j = 1; j != 8; ++j) {
+      if (FixupMap[i * 8 + j] == MapEntry)
+        continue;
 
-    // Show the encoding if we have a code emitter.
-    if (Emitter) {
-      SmallString<256> Code;
-      raw_svector_ostream VecOS(Code);
-      Emitter->EncodeInstruction(Inst, VecOS);
-      VecOS.flush();
-  
-      OS.indent(20);
-      OS << " # encoding: [";
-      for (unsigned i = 0, e = Code.size(); i != e; ++i) {
-        if (i)
-          OS << ',';
-        OS << format("%#04x", uint8_t(Code[i]));
+      MapEntry = uint8_t(~0U);
+      break;
+    }
+
+    if (MapEntry != uint8_t(~0U)) {
+      if (MapEntry == 0) {
+        OS << format("0x%02x", uint8_t(Code[i]));
+      } else {
+        assert(Code[i] == 0 && "Encoder wrote into fixed up bit!");
+        OS << char('A' + MapEntry - 1);
+      }
+    } else {
+      // Otherwise, write out in binary.
+      OS << "0b";
+      for (unsigned j = 8; j--;) {
+        unsigned Bit = (Code[i] >> j) & 1;
+        if (uint8_t MapEntry = FixupMap[i * 8 + j]) {
+          assert(Bit == 0 && "Encoder wrote into fixed up bit!");
+          OS << char('A' + MapEntry - 1);
+        } else
+          OS << Bit;
       }
-      OS << "]\n";
     }
+  }
+  OS << "]\n";
 
-    return;
+  for (unsigned i = 0, e = Fixups.size(); i != e; ++i) {
+    MCFixup &F = Fixups[i];
+    const MCFixupKindInfo &Info = Emitter->getFixupKindInfo(F.getKind());
+    OS << "  fixup " << char('A' + i) << " - " << "offset: " << F.getOffset()
+       << ", value: " << *F.getValue() << ", kind: " << Info.Name << "\n";
   }
+}
 
-  // Otherwise fall back to a structural printing for now. Eventually we should
-  // always have access to the target specific printer.
-  Inst.print(OS, &MAI);
+void MCAsmStreamer::EmitInstruction(const MCInst &Inst) {
+  assert(CurSection && "Cannot emit contents before setting section!");
+
+  // Show the encoding in a comment if we have a code emitter.
+  if (Emitter)
+    AddEncodingComment(Inst);
+
+  // Show the MCInst if enabled.
+  if (ShowInst) {
+    raw_ostream &OS = GetCommentOS();
+    OS << "<MCInst #" << Inst.getOpcode();
+    
+    StringRef InstName;
+    if (InstPrinter)
+      InstName = InstPrinter->getOpcodeName(Inst.getOpcode());
+    if (!InstName.empty())
+      OS << ' ' << InstName;
+    
+    for (unsigned i = 0, e = Inst.getNumOperands(); i != e; ++i) {
+      OS << "\n  ";
+      Inst.getOperand(i).print(OS, &MAI);
+    }
+    OS << ">\n";
+  }
+  
+  // If we have an AsmPrinter, use that to print, otherwise dump the MCInst.
+  if (InstPrinter)
+    InstPrinter->printInst(&Inst);
+  else
+    Inst.print(OS, &MAI);
   EmitEOL();
 }
 
 void MCAsmStreamer::Finish() {
   OS.flush();
 }
-    
+
 MCStreamer *llvm::createAsmStreamer(MCContext &Context,
                                     formatted_raw_ostream &OS,
                                     const MCAsmInfo &MAI, bool isLittleEndian,
                                     bool isVerboseAsm, MCInstPrinter *IP,
-                                    MCCodeEmitter *CE) {
+                                    MCCodeEmitter *CE, bool ShowInst) {
   return new MCAsmStreamer(Context, OS, MAI, isLittleEndian, isVerboseAsm,
-                           IP, CE);
+                           IP, CE, ShowInst);
 }
diff --git a/libclamav/c++/llvm/lib/MC/MCAssembler.cpp b/libclamav/c++/llvm/lib/MC/MCAssembler.cpp
index f0f5a47..653fbf2 100644
--- a/libclamav/c++/llvm/lib/MC/MCAssembler.cpp
+++ b/libclamav/c++/llvm/lib/MC/MCAssembler.cpp
@@ -13,14 +13,20 @@
 #include "llvm/MC/MCSectionMachO.h"
 #include "llvm/MC/MCSymbol.h"
 #include "llvm/MC/MCValue.h"
-#include "llvm/Target/TargetMachOWriterInfo.h"
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/SmallString.h"
 #include "llvm/ADT/Statistic.h"
+#include "llvm/ADT/StringExtras.h"
 #include "llvm/ADT/StringMap.h"
 #include "llvm/ADT/Twine.h"
 #include "llvm/Support/ErrorHandling.h"
+#include "llvm/Support/MachO.h"
 #include "llvm/Support/raw_ostream.h"
+#include "llvm/Support/Debug.h"
+
+// FIXME: Gross.
+#include "../Target/X86/X86FixupKinds.h"
+
 #include <vector>
 using namespace llvm;
 
@@ -45,6 +51,30 @@ static bool isVirtualSection(const MCSection &Section) {
   return (Type == MCSectionMachO::S_ZEROFILL);
 }
 
+static unsigned getFixupKindLog2Size(MCFixupKind Kind) {
+  switch (Kind) {
+  default: llvm_unreachable("invalid fixup kind!");
+  case X86::reloc_pcrel_1byte:
+  case FK_Data_1: return 0;
+  case FK_Data_2: return 1;
+  case X86::reloc_pcrel_4byte:
+  case X86::reloc_riprel_4byte:
+  case FK_Data_4: return 2;
+  case FK_Data_8: return 3;
+  }
+}
+
+static bool isFixupKindPCRel(MCFixupKind Kind) {
+  switch (Kind) {
+  default:
+    return false;
+  case X86::reloc_pcrel_1byte:
+  case X86::reloc_pcrel_4byte:
+  case X86::reloc_riprel_4byte:
+    return true;
+  }
+}
+
 class MachObjectWriter {
   // See <mach-o/loader.h>.
   enum {
@@ -203,9 +233,9 @@ public:
     Write32(Header_Magic32);
 
     // FIXME: Support cputype.
-    Write32(TargetMachOWriterInfo::HDR_CPU_TYPE_I386);
+    Write32(MachO::CPUTypeI386);
     // FIXME: Support cpusubtype.
-    Write32(TargetMachOWriterInfo::HDR_CPU_SUBTYPE_I386_ALL);
+    Write32(MachO::CPUSubType_I386_ALL);
     Write32(HFT_Object);
     Write32(NumLoadCommands);    // Object files have a single load command, the
                                  // segment.
@@ -266,11 +296,15 @@ public:
     Write32(SD.getSize()); // size
     Write32(FileOffset);
 
+    unsigned Flags = Section.getTypeAndAttributes();
+    if (SD.hasInstructions())
+      Flags |= MCSectionMachO::S_ATTR_SOME_INSTRUCTIONS;
+
     assert(isPowerOf2_32(SD.getAlignment()) && "Invalid alignment!");
     Write32(Log2_32(SD.getAlignment()));
     Write32(NumRelocations ? RelocationsStart : 0);
     Write32(NumRelocations);
-    Write32(Section.getTypeAndAttributes());
+    Write32(Flags);
     Write32(0); // reserved1
     Write32(Section.getStubSize()); // reserved2
 
@@ -398,12 +432,12 @@ public:
     uint32_t Word0;
     uint32_t Word1;
   };
-  void ComputeScatteredRelocationInfo(MCAssembler &Asm,
-                                      MCSectionData::Fixup &Fixup,
+  void ComputeScatteredRelocationInfo(MCAssembler &Asm, MCFragment &Fragment,
+                                      MCAsmFixup &Fixup,
                                       const MCValue &Target,
                              DenseMap<const MCSymbol*,MCSymbolData*> &SymbolMap,
                                      std::vector<MachRelocationEntry> &Relocs) {
-    uint32_t Address = Fixup.Fragment->getOffset() + Fixup.Offset;
+    uint32_t Address = Fragment.getOffset() + Fixup.Offset;
     unsigned IsPCRel = 0;
     unsigned Type = RIT_Vanilla;
 
@@ -420,11 +454,14 @@ public:
       Value2 = SD->getFragment()->getAddress() + SD->getOffset();
     }
 
-    unsigned Log2Size = Log2_32(Fixup.Size);
-    assert((1U << Log2Size) == Fixup.Size && "Invalid fixup size!");
+    unsigned Log2Size = getFixupKindLog2Size(Fixup.Kind);
 
     // The value which goes in the fixup is current value of the expression.
     Fixup.FixedValue = Value - Value2 + Target.getConstant();
+    if (isFixupKindPCRel(Fixup.Kind)) {
+      Fixup.FixedValue -= Address + (1 << Log2Size);
+      IsPCRel = 1;
+    }
 
     MachRelocationEntry MRE;
     MRE.Word0 = ((Address   <<  0) |
@@ -449,8 +486,8 @@ public:
     }
   }
 
-  void ComputeRelocationInfo(MCAssembler &Asm,
-                             MCSectionData::Fixup &Fixup,
+  void ComputeRelocationInfo(MCAssembler &Asm, MCDataFragment &Fragment,
+                             MCAsmFixup &Fixup,
                              DenseMap<const MCSymbol*,MCSymbolData*> &SymbolMap,
                              std::vector<MachRelocationEntry> &Relocs) {
     MCValue Target;
@@ -462,11 +499,11 @@ public:
     if (Target.getSymB() ||
         (Target.getSymA() && !Target.getSymA()->isUndefined() &&
          Target.getConstant()))
-      return ComputeScatteredRelocationInfo(Asm, Fixup, Target,
+      return ComputeScatteredRelocationInfo(Asm, Fragment, Fixup, Target,
                                             SymbolMap, Relocs);
 
     // See <reloc.h>.
-    uint32_t Address = Fixup.Fragment->getOffset() + Fixup.Offset;
+    uint32_t Address = Fragment.getOffset() + Fixup.Offset;
     uint32_t Value = 0;
     unsigned Index = 0;
     unsigned IsPCRel = 0;
@@ -475,6 +512,8 @@ public:
 
     if (Target.isAbsolute()) { // constant
       // SymbolNum of 0 indicates the absolute section.
+      //
+      // FIXME: When is this generated?
       Type = RIT_Vanilla;
       Value = 0;
       llvm_unreachable("FIXME: Not yet implemented!");
@@ -491,10 +530,11 @@ public:
         //
         // FIXME: O(N)
         Index = 1;
-        for (MCAssembler::iterator it = Asm.begin(),
-               ie = Asm.end(); it != ie; ++it, ++Index)
+        MCAssembler::iterator it = Asm.begin(), ie = Asm.end();
+        for (; it != ie; ++it, ++Index)
           if (&*it == SD->getFragment()->getParent())
             break;
+        assert(it != ie && "Unable to find section index!");
         Value = SD->getFragment()->getAddress() + SD->getOffset();
       }
 
@@ -504,8 +544,12 @@ public:
     // The value which goes in the fixup is current value of the expression.
     Fixup.FixedValue = Value + Target.getConstant();
 
-    unsigned Log2Size = Log2_32(Fixup.Size);
-    assert((1U << Log2Size) == Fixup.Size && "Invalid fixup size!");
+    unsigned Log2Size = getFixupKindLog2Size(Fixup.Kind);
+
+    if (isFixupKindPCRel(Fixup.Kind)) {
+      Fixup.FixedValue -= Address + (1<<Log2Size);
+      IsPCRel = 1;
+    }
 
     // struct relocation_info (8 bytes)
     MachRelocationEntry MRE;
@@ -742,7 +786,7 @@ public:
                                      SD.getAddress() + SD.getFileSize());
     }
 
-    // The section data is passed to 4 bytes.
+    // The section data is padded to 4 bytes.
     //
     // FIXME: Is this machine dependent?
     unsigned SectionDataPadding = OffsetToAlignment(SectionDataFileSize, 4);
@@ -757,22 +801,25 @@ public:
     // ... and then the section headers.
     //
     // We also compute the section relocations while we do this. Note that
-    // compute relocation info will also update the fixup to have the correct
-    // value; this will be overwrite the appropriate data in the fragment when
-    // it is written.
+    // computing relocation info will also update the fixup to have the correct
+    // value; this will overwrite the appropriate data in the fragment when it
+    // is written.
     std::vector<MachRelocationEntry> RelocInfos;
     uint64_t RelocTableEnd = SectionDataStart + SectionDataFileSize;
-    for (MCAssembler::iterator it = Asm.begin(), ie = Asm.end(); it != ie;
-         ++it) {
+    for (MCAssembler::iterator it = Asm.begin(),
+           ie = Asm.end(); it != ie; ++it) {
       MCSectionData &SD = *it;
 
       // The assembler writes relocations in the reverse order they were seen.
       //
       // FIXME: It is probably more complicated than this.
       unsigned NumRelocsStart = RelocInfos.size();
-      for (unsigned i = 0, e = SD.fixup_size(); i != e; ++i)
-        ComputeRelocationInfo(Asm, SD.getFixups()[e - i - 1], SymbolMap,
-                              RelocInfos);
+      for (MCSectionData::reverse_iterator it2 = SD.rbegin(),
+             ie2 = SD.rend(); it2 != ie2; ++it2)
+        if (MCDataFragment *DF = dyn_cast<MCDataFragment>(&*it2))
+          for (unsigned i = 0, e = DF->fixup_size(); i != e; ++i)
+            ComputeRelocationInfo(Asm, *DF, DF->getFixups()[e - i - 1],
+                                  SymbolMap, RelocInfos);
 
       unsigned NumRelocs = RelocInfos.size() - NumRelocsStart;
       uint64_t SectionStart = SectionDataStart + SD.getAddress();
@@ -867,6 +914,16 @@ public:
       OS << StringTable.str();
     }
   }
+
+  void ApplyFixup(const MCAsmFixup &Fixup, MCDataFragment &DF) {
+    unsigned Size = 1 << getFixupKindLog2Size(Fixup.Kind);
+
+    // FIXME: Endianness assumption.
+    assert(Fixup.Offset + Size <= DF.getContents().size() &&
+           "Invalid fixup offset!");
+    for (unsigned i = 0; i != Size; ++i)
+      DF.getContents()[Fixup.Offset + i] = uint8_t(Fixup.FixedValue >> (i * 8));
+  }
 };
 
 /* *** */
@@ -901,34 +958,12 @@ MCSectionData::MCSectionData(const MCSection &_Section, MCAssembler *A)
     Address(~UINT64_C(0)),
     Size(~UINT64_C(0)),
     FileSize(~UINT64_C(0)),
-    LastFixupLookup(~0)
+    HasInstructions(false)
 {
   if (A)
     A->getSectionList().push_back(this);
 }
 
-const MCSectionData::Fixup *
-MCSectionData::LookupFixup(const MCFragment *Fragment, uint64_t Offset) const {
-  // Use a one level cache to turn the common case of accessing the fixups in
-  // order into O(1) instead of O(N).
-  unsigned i = LastFixupLookup, Count = Fixups.size(), End = Fixups.size();
-  if (i >= End)
-    i = 0;
-  while (Count--) {
-    const Fixup &F = Fixups[i];
-    if (F.Fragment == Fragment && F.Offset == Offset) {
-      LastFixupLookup = i;
-      return &F;
-    }
-
-    ++i;
-    if (i == End)
-      i = 0;
-  }
-
-  return 0;
-}
-
 /* *** */
 
 MCSymbolData::MCSymbolData() : Symbol(0) {}
@@ -975,31 +1010,10 @@ void MCAssembler::LayoutSection(MCSectionData &SD) {
     }
 
     case MCFragment::FT_Data:
+    case MCFragment::FT_Fill:
       F.setFileSize(F.getMaxFileSize());
       break;
 
-    case MCFragment::FT_Fill: {
-      MCFillFragment &FF = cast<MCFillFragment>(F);
-
-      F.setFileSize(F.getMaxFileSize());
-
-      MCValue Target;
-      if (!FF.getValue().EvaluateAsRelocatable(Target))
-        llvm_report_error("expected relocatable expression");
-
-      // If the fill value is constant, thats it.
-      if (Target.isAbsolute())
-        break;
-
-      // Otherwise, add fixups for the values.
-      for (uint64_t i = 0, e = FF.getCount(); i != e; ++i) {
-        MCSectionData::Fixup Fix(F, i * FF.getValueSize(),
-                                 FF.getValue(),FF.getValueSize());
-        SD.getFixups().push_back(Fix);
-      }
-      break;
-    }
-
     case MCFragment::FT_Org: {
       MCOrgFragment &OF = cast<MCOrgFragment>(F);
 
@@ -1082,39 +1096,30 @@ static void WriteFileData(raw_ostream &OS, const MCFragment &F,
     break;
   }
 
-  case MCFragment::FT_Data:
+  case MCFragment::FT_Data: {
+    MCDataFragment &DF = cast<MCDataFragment>(F);
+
+    // Apply the fixups.
+    //
+    // FIXME: Move elsewhere.
+    for (MCDataFragment::const_fixup_iterator it = DF.fixup_begin(),
+           ie = DF.fixup_end(); it != ie; ++it)
+      MOW.ApplyFixup(*it, DF);
+
     OS << cast<MCDataFragment>(F).getContents().str();
     break;
+  }
 
   case MCFragment::FT_Fill: {
     MCFillFragment &FF = cast<MCFillFragment>(F);
-
-    int64_t Value = 0;
-
-    MCValue Target;
-    if (!FF.getValue().EvaluateAsRelocatable(Target))
-      llvm_report_error("expected relocatable expression");
-
-    if (Target.isAbsolute())
-      Value = Target.getConstant();
     for (uint64_t i = 0, e = FF.getCount(); i != e; ++i) {
-      if (!Target.isAbsolute()) {
-        // Find the fixup.
-        //
-        // FIXME: Find a better way to write in the fixes.
-        const MCSectionData::Fixup *Fixup =
-          F.getParent()->LookupFixup(&F, i * FF.getValueSize());
-        assert(Fixup && "Missing fixup for fill value!");
-        Value = Fixup->FixedValue;
-      }
-
       switch (FF.getValueSize()) {
       default:
         assert(0 && "Invalid size!");
-      case 1: MOW.Write8 (uint8_t (Value)); break;
-      case 2: MOW.Write16(uint16_t(Value)); break;
-      case 4: MOW.Write32(uint32_t(Value)); break;
-      case 8: MOW.Write64(uint64_t(Value)); break;
+      case 1: MOW.Write8 (uint8_t (FF.getValue())); break;
+      case 2: MOW.Write16(uint16_t(FF.getValue())); break;
+      case 4: MOW.Write32(uint32_t(FF.getValue())); break;
+      case 8: MOW.Write64(uint64_t(FF.getValue())); break;
       }
     }
     break;
@@ -1162,6 +1167,10 @@ static void WriteFileData(raw_ostream &OS, const MCSectionData &SD,
 }
 
 void MCAssembler::Finish() {
+  DEBUG_WITH_TYPE("mc-dump", {
+      llvm::errs() << "assembler backend - pre-layout\n--\n";
+      dump(); });
+
   // Layout the concrete sections and fragments.
   uint64_t Address = 0;
   MCSectionData *Prev = 0;
@@ -1200,9 +1209,149 @@ void MCAssembler::Finish() {
     Address += SD.getSize();
   }
 
+  DEBUG_WITH_TYPE("mc-dump", {
+      llvm::errs() << "assembler backend - post-layout\n--\n";
+      dump(); });
+
   // Write the object file.
   MachObjectWriter MOW(OS);
   MOW.WriteObject(*this);
 
   OS.flush();
 }
+
+
+// Debugging methods
+
+namespace llvm {
+
+raw_ostream &operator<<(raw_ostream &OS, const MCAsmFixup &AF) {
+  OS << "<MCAsmFixup" << " Offset:" << AF.Offset << " Value:" << *AF.Value
+     << " Kind:" << AF.Kind << ">";
+  return OS;
+}
+
+}
+
+void MCFragment::dump() {
+  raw_ostream &OS = llvm::errs();
+
+  OS << "<MCFragment " << (void*) this << " Offset:" << Offset
+     << " FileSize:" << FileSize;
+
+  OS << ">";
+}
+
+void MCAlignFragment::dump() {
+  raw_ostream &OS = llvm::errs();
+
+  OS << "<MCAlignFragment ";
+  this->MCFragment::dump();
+  OS << "\n       ";
+  OS << " Alignment:" << getAlignment()
+     << " Value:" << getValue() << " ValueSize:" << getValueSize()
+     << " MaxBytesToEmit:" << getMaxBytesToEmit() << ">";
+}
+
+void MCDataFragment::dump() {
+  raw_ostream &OS = llvm::errs();
+
+  OS << "<MCDataFragment ";
+  this->MCFragment::dump();
+  OS << "\n       ";
+  OS << " Contents:[";
+  for (unsigned i = 0, e = getContents().size(); i != e; ++i) {
+    if (i) OS << ",";
+    OS << hexdigit((Contents[i] >> 4) & 0xF) << hexdigit(Contents[i] & 0xF);
+  }
+  OS << "] (" << getContents().size() << " bytes)";
+
+  if (!getFixups().empty()) {
+    OS << ",\n       ";
+    OS << " Fixups:[";
+    for (fixup_iterator it = fixup_begin(), ie = fixup_end(); it != ie; ++it) {
+      if (it != fixup_begin()) OS << ",\n            ";
+      OS << *it;
+    }
+    OS << "]";
+  }
+
+  OS << ">";
+}
+
+void MCFillFragment::dump() {
+  raw_ostream &OS = llvm::errs();
+
+  OS << "<MCFillFragment ";
+  this->MCFragment::dump();
+  OS << "\n       ";
+  OS << " Value:" << getValue() << " ValueSize:" << getValueSize()
+     << " Count:" << getCount() << ">";
+}
+
+void MCOrgFragment::dump() {
+  raw_ostream &OS = llvm::errs();
+
+  OS << "<MCOrgFragment ";
+  this->MCFragment::dump();
+  OS << "\n       ";
+  OS << " Offset:" << getOffset() << " Value:" << getValue() << ">";
+}
+
+void MCZeroFillFragment::dump() {
+  raw_ostream &OS = llvm::errs();
+
+  OS << "<MCZeroFillFragment ";
+  this->MCFragment::dump();
+  OS << "\n       ";
+  OS << " Size:" << getSize() << " Alignment:" << getAlignment() << ">";
+}
+
+void MCSectionData::dump() {
+  raw_ostream &OS = llvm::errs();
+
+  OS << "<MCSectionData";
+  OS << " Alignment:" << getAlignment() << " Address:" << Address
+     << " Size:" << Size << " FileSize:" << FileSize
+     << " Fragments:[";
+  for (iterator it = begin(), ie = end(); it != ie; ++it) {
+    if (it != begin()) OS << ",\n      ";
+    it->dump();
+  }
+  OS << "]>";
+}
+
+void MCSymbolData::dump() {
+  raw_ostream &OS = llvm::errs();
+
+  OS << "<MCSymbolData Symbol:" << getSymbol()
+     << " Fragment:" << getFragment() << " Offset:" << getOffset()
+     << " Flags:" << getFlags() << " Index:" << getIndex();
+  if (isCommon())
+    OS << " (common, size:" << getCommonSize()
+       << " align: " << getCommonAlignment() << ")";
+  if (isExternal())
+    OS << " (external)";
+  if (isPrivateExtern())
+    OS << " (private extern)";
+  OS << ">";
+}
+
+void MCAssembler::dump() {
+  raw_ostream &OS = llvm::errs();
+
+  OS << "<MCAssembler\n";
+  OS << "  Sections:[";
+  for (iterator it = begin(), ie = end(); it != ie; ++it) {
+    if (it != begin()) OS << ",\n    ";
+    it->dump();
+  }
+  OS << "],\n";
+  OS << "  Symbols:[";
+
+  for (symbol_iterator it = symbol_begin(), ie = symbol_end(); it != ie; ++it) {
+    if (it != symbol_begin()) OS << ",\n    ";
+    it->dump();
+  }
+  OS << "]>\n";
+}
diff --git a/libclamav/c++/llvm/lib/MC/MCCodeEmitter.cpp b/libclamav/c++/llvm/lib/MC/MCCodeEmitter.cpp
index c122763..accb06c 100644
--- a/libclamav/c++/llvm/lib/MC/MCCodeEmitter.cpp
+++ b/libclamav/c++/llvm/lib/MC/MCCodeEmitter.cpp
@@ -16,3 +16,15 @@ MCCodeEmitter::MCCodeEmitter() {
 
 MCCodeEmitter::~MCCodeEmitter() {
 }
+
+const MCFixupKindInfo &MCCodeEmitter::getFixupKindInfo(MCFixupKind Kind) const {
+  static const MCFixupKindInfo Builtins[] = {
+    { "FK_Data_1", 0, 8 },
+    { "FK_Data_2", 0, 16 },
+    { "FK_Data_4", 0, 32 },
+    { "FK_Data_8", 0, 64 }
+  };
+  
+  assert(Kind <= 3 && "Unknown fixup kind");
+  return Builtins[Kind];
+}
diff --git a/libclamav/c++/llvm/lib/MC/MCExpr.cpp b/libclamav/c++/llvm/lib/MC/MCExpr.cpp
index 1ee1b1b..e419043 100644
--- a/libclamav/c++/llvm/lib/MC/MCExpr.cpp
+++ b/libclamav/c++/llvm/lib/MC/MCExpr.cpp
@@ -17,6 +17,8 @@ using namespace llvm;
 
 void MCExpr::print(raw_ostream &OS) const {
   switch (getKind()) {
+  case MCExpr::Target:
+    return cast<MCTargetExpr>(this)->PrintImpl(OS);
   case MCExpr::Constant:
     OS << cast<MCConstantExpr>(*this).getValue();
     return;
@@ -131,6 +133,7 @@ const MCSymbolRefExpr *MCSymbolRefExpr::Create(StringRef Name, MCContext &Ctx) {
   return Create(Ctx.GetOrCreateSymbol(Name), Ctx);
 }
 
+void MCTargetExpr::Anchor() {}
 
 /* *** */
 
@@ -168,6 +171,9 @@ static bool EvaluateSymbolicAdd(const MCValue &LHS, const MCSymbol *RHS_A,
 
 bool MCExpr::EvaluateAsRelocatable(MCValue &Res) const {
   switch (getKind()) {
+  case Target:
+    return cast<MCTargetExpr>(this)->EvaluateAsRelocatableImpl(Res);
+      
   case Constant:
     Res = MCValue::get(cast<MCConstantExpr>(this)->getValue());
     return true;
@@ -246,8 +252,8 @@ bool MCExpr::EvaluateAsRelocatable(MCValue &Res) const {
     }
 
     // FIXME: We need target hooks for the evaluation. It may be limited in
-    // width, and gas defines the result of comparisons differently from Apple
-    // as (the result is sign extended).
+    // width, and gas defines the result of comparisons and right shifts
+    // differently from Apple as.
     int64_t LHS = LHSValue.getConstant(), RHS = RHSValue.getConstant();
     int64_t Result = 0;
     switch (ABE->getOpcode()) {
diff --git a/libclamav/c++/llvm/lib/MC/MCInstPrinter.cpp b/libclamav/c++/llvm/lib/MC/MCInstPrinter.cpp
index e90c03c..92a7154 100644
--- a/libclamav/c++/llvm/lib/MC/MCInstPrinter.cpp
+++ b/libclamav/c++/llvm/lib/MC/MCInstPrinter.cpp
@@ -8,7 +8,14 @@
 //===----------------------------------------------------------------------===//
 
 #include "llvm/MC/MCInstPrinter.h"
+#include "llvm/ADT/StringRef.h"
 using namespace llvm;
 
 MCInstPrinter::~MCInstPrinter() {
 }
+
+/// getOpcodeName - Return the name of the specified opcode enum (e.g.
+/// "MOV32ri") or empty if we can't resolve it.
+StringRef MCInstPrinter::getOpcodeName(unsigned Opcode) const {
+  return "";
+}
diff --git a/libclamav/c++/llvm/lib/MC/MCMachOStreamer.cpp b/libclamav/c++/llvm/lib/MC/MCMachOStreamer.cpp
index 143793c..0c9627d 100644
--- a/libclamav/c++/llvm/lib/MC/MCMachOStreamer.cpp
+++ b/libclamav/c++/llvm/lib/MC/MCMachOStreamer.cpp
@@ -87,6 +87,7 @@ public:
 
   const MCExpr *AddValueSymbols(const MCExpr *Value) {
     switch (Value->getKind()) {
+    case MCExpr::Target: assert(0 && "Can't handle target exprs yet!");
     case MCExpr::Constant:
       break;
 
@@ -332,7 +333,24 @@ void MCMachOStreamer::EmitBytes(StringRef Data, unsigned AddrSpace) {
 
 void MCMachOStreamer::EmitValue(const MCExpr *Value, unsigned Size,
                                 unsigned AddrSpace) {
-  new MCFillFragment(*AddValueSymbols(Value), Size, 1, CurSectionData);
+  // Assume the front-end will have evaluated absolute expressions, so just
+  // create data + fixup.
+  MCDataFragment *DF = dyn_cast_or_null<MCDataFragment>(getCurrentFragment());
+  if (!DF)
+    DF = new MCDataFragment(CurSectionData);
+
+  // Avoid fixups when possible.
+  int64_t AbsValue;
+  if (Value->EvaluateAsAbsolute(AbsValue)) {
+    // FIXME: Endianness assumption.
+    for (unsigned i = 0; i != Size; ++i)
+      DF->getContents().push_back(uint8_t(AbsValue >> (i * 8)));
+  } else {
+    DF->getFixups().push_back(MCAsmFixup(DF->getContents().size(),
+                                         *AddValueSymbols(Value),
+                                         MCFixup::getKindForSize(Size)));
+    DF->getContents().resize(DF->getContents().size() + Size, 0);
+  }
 }
 
 void MCMachOStreamer::EmitValueToAlignment(unsigned ByteAlignment,
@@ -362,13 +380,25 @@ void MCMachOStreamer::EmitInstruction(const MCInst &Inst) {
   if (!Emitter)
     llvm_unreachable("no code emitter available!");
 
-  // FIXME: Emitting an instruction should cause S_ATTR_SOME_INSTRUCTIONS to
-  //        be set for the current section.
-  // FIXME: Relocations!
+  CurSectionData->setHasInstructions(true);
+
+  SmallVector<MCFixup, 4> Fixups;
   SmallString<256> Code;
   raw_svector_ostream VecOS(Code);
-  Emitter->EncodeInstruction(Inst, VecOS);
-  EmitBytes(VecOS.str(), 0);
+  Emitter->EncodeInstruction(Inst, VecOS, Fixups);
+  VecOS.flush();
+
+  // Add the fixups and data.
+  MCDataFragment *DF = dyn_cast_or_null<MCDataFragment>(getCurrentFragment());
+  if (!DF)
+    DF = new MCDataFragment(CurSectionData);
+  for (unsigned i = 0, e = Fixups.size(); i != e; ++i) {
+    MCFixup &F = Fixups[i];
+    DF->getFixups().push_back(MCAsmFixup(DF->getContents().size()+F.getOffset(),
+                                         *F.getValue(),
+                                         F.getKind()));
+  }
+  DF->getContents().append(Code.begin(), Code.end());
 }
 
 void MCMachOStreamer::Finish() {
diff --git a/libclamav/c++/llvm/lib/Support/APInt.cpp b/libclamav/c++/llvm/lib/Support/APInt.cpp
index 9d14684..3bce3f3 100644
--- a/libclamav/c++/llvm/lib/Support/APInt.cpp
+++ b/libclamav/c++/llvm/lib/Support/APInt.cpp
@@ -273,7 +273,7 @@ APInt& APInt::operator-=(const APInt& RHS) {
   return clearUnusedBits();
 }
 
-/// Multiplies an integer array, x by a a uint64_t integer and places the result
+/// Multiplies an integer array, x, by a uint64_t integer and places the result
 /// into dest.
 /// @returns the carry out of the multiplication.
 /// @brief Multiply a multi-digit APInt by a single digit (64-bit) integer.
@@ -767,8 +767,23 @@ bool APInt::isPowerOf2() const {
 }
 
 unsigned APInt::countLeadingZerosSlowCase() const {
-  unsigned Count = 0;
-  for (unsigned i = getNumWords(); i > 0u; --i) {
+  // Treat the most significant word differently because it might have
+  // meaningless bits set beyond the precision.
+  unsigned BitsInMSW = BitWidth % APINT_BITS_PER_WORD;
+  integerPart MSWMask;
+  if (BitsInMSW) MSWMask = (integerPart(1) << BitsInMSW) - 1;
+  else {
+    MSWMask = ~integerPart(0);
+    BitsInMSW = APINT_BITS_PER_WORD;
+  }
+
+  unsigned i = getNumWords();
+  integerPart MSW = pVal[i-1] & MSWMask;
+  if (MSW)
+    return CountLeadingZeros_64(MSW) - (APINT_BITS_PER_WORD - BitsInMSW);
+
+  unsigned Count = BitsInMSW;
+  for (--i; i > 0u; --i) {
     if (pVal[i-1] == 0)
       Count += APINT_BITS_PER_WORD;
     else {
@@ -776,10 +791,7 @@ unsigned APInt::countLeadingZerosSlowCase() const {
       break;
     }
   }
-  unsigned remainder = BitWidth % APINT_BITS_PER_WORD;
-  if (remainder)
-    Count -= APINT_BITS_PER_WORD - remainder;
-  return std::min(Count, BitWidth);
+  return Count;
 }
 
 static unsigned countLeadingOnes_64(uint64_t V, unsigned skip) {
@@ -1754,7 +1766,7 @@ void APInt::divide(const APInt LHS, unsigned lhsWords,
 
   // First, compose the values into an array of 32-bit words instead of
   // 64-bit words. This is a necessity of both the "short division" algorithm
-  // and the the Knuth "classical algorithm" which requires there to be native
+  // and the Knuth "classical algorithm" which requires there to be native
   // operations for +, -, and * on an m bit value with an m*2 bit result. We
   // can't use 64-bit operands here because we don't have native results of
   // 128-bits. Furthermore, casting the 64-bit values to 32-bit values won't
diff --git a/libclamav/c++/llvm/lib/Support/CommandLine.cpp b/libclamav/c++/llvm/lib/Support/CommandLine.cpp
index fa692be..961dc1f 100644
--- a/libclamav/c++/llvm/lib/Support/CommandLine.cpp
+++ b/libclamav/c++/llvm/lib/Support/CommandLine.cpp
@@ -507,8 +507,9 @@ void cl::ParseCommandLineOptions(int argc, char **argv,
 
   // Copy the program name into ProgName, making sure not to overflow it.
   std::string ProgName = sys::Path(argv[0]).getLast();
-  if (ProgName.size() > 79) ProgName.resize(79);
-  strcpy(ProgramName, ProgName.c_str());
+  size_t Len = std::min(ProgName.size(), size_t(79));
+  memcpy(ProgramName, ProgName.data(), Len);
+  ProgramName[Len] = '\0';
 
   ProgramOverview = Overview;
   bool ErrorParsing = false;
diff --git a/libclamav/c++/llvm/lib/Support/ConstantRange.cpp b/libclamav/c++/llvm/lib/Support/ConstantRange.cpp
index e9ddffc..2746f7a 100644
--- a/libclamav/c++/llvm/lib/Support/ConstantRange.cpp
+++ b/libclamav/c++/llvm/lib/Support/ConstantRange.cpp
@@ -540,8 +540,10 @@ ConstantRange::add(const ConstantRange &Other) const {
 
 ConstantRange
 ConstantRange::multiply(const ConstantRange &Other) const {
-  // TODO: If either operand is a single element, round the result min anx
-  // max value to the appropriate multiple of that element.
+  // TODO: If either operand is a single element and the multiply is known to
+  // be non-wrapping, round the result min and max value to the appropriate
+  // multiple of that element. If wrapping is possible, at least adjust the
+  // range according to the greatest power-of-two factor of the single element.
 
   if (isEmptySet() || Other.isEmptySet())
     return ConstantRange(getBitWidth(), /*isFullSet=*/false);
diff --git a/libclamav/c++/llvm/lib/Support/FileUtilities.cpp b/libclamav/c++/llvm/lib/Support/FileUtilities.cpp
index 21080b6..095395f 100644
--- a/libclamav/c++/llvm/lib/Support/FileUtilities.cpp
+++ b/libclamav/c++/llvm/lib/Support/FileUtilities.cpp
@@ -13,11 +13,11 @@
 //===----------------------------------------------------------------------===//
 
 #include "llvm/Support/FileUtilities.h"
-#include "llvm/System/Path.h"
 #include "llvm/Support/MemoryBuffer.h"
+#include "llvm/Support/raw_ostream.h"
+#include "llvm/System/Path.h"
 #include "llvm/ADT/OwningPtr.h"
 #include "llvm/ADT/SmallString.h"
-#include "llvm/ADT/StringExtras.h"
 #include <cstdlib>
 #include <cstring>
 #include <cctype>
@@ -139,11 +139,11 @@ static bool CompareNumbers(const char *&F1P, const char *&F2P,
       Diff = 0;  // Both zero.
     if (Diff > RelTolerance) {
       if (ErrorMsg) {
-        *ErrorMsg = "Compared: " + ftostr(V1) + " and " + ftostr(V2) + "\n";
-        *ErrorMsg += "abs. diff = " + ftostr(std::abs(V1-V2)) + 
-                     " rel.diff = " + ftostr(Diff) + "\n";
-        *ErrorMsg += "Out of tolerance: rel/abs: " + ftostr(RelTolerance) +
-                     "/" + ftostr(AbsTolerance);
+        raw_string_ostream(*ErrorMsg)
+          << "Compared: " << V1 << " and " << V2 << '\n'
+          << "abs. diff = " << std::abs(V1-V2) << " rel.diff = " << Diff << '\n'
+          << "Out of tolerance: rel/abs: " << RelTolerance << '/'
+          << AbsTolerance;
       }
       return true;
     }
diff --git a/libclamav/c++/llvm/lib/Support/FormattedStream.cpp b/libclamav/c++/llvm/lib/Support/FormattedStream.cpp
index 9ab3666..39b6cb3 100644
--- a/libclamav/c++/llvm/lib/Support/FormattedStream.cpp
+++ b/libclamav/c++/llvm/lib/Support/FormattedStream.cpp
@@ -59,12 +59,13 @@ void formatted_raw_ostream::ComputeColumn(const char *Ptr, size_t Size) {
 /// \param MinPad - The minimum space to give after the most recent
 /// I/O, even if the current column + minpad > newcol.
 ///
-void formatted_raw_ostream::PadToColumn(unsigned NewCol) { 
+formatted_raw_ostream &formatted_raw_ostream::PadToColumn(unsigned NewCol) { 
   // Figure out what's in the buffer and add it to the column count.
   ComputeColumn(getBufferStart(), GetNumBytesInBuffer());
 
   // Output spaces until we reach the desired column.
   indent(std::max(int(NewCol - ColumnScanned), 1));
+  return *this;
 }
 
 void formatted_raw_ostream::write_impl(const char *Ptr, size_t Size) {
diff --git a/libclamav/c++/llvm/lib/Support/SourceMgr.cpp b/libclamav/c++/llvm/lib/Support/SourceMgr.cpp
index bdc637a..83c7964 100644
--- a/libclamav/c++/llvm/lib/Support/SourceMgr.cpp
+++ b/libclamav/c++/llvm/lib/Support/SourceMgr.cpp
@@ -35,7 +35,7 @@ SourceMgr::~SourceMgr() {
   // Delete the line # cache if allocated.
   if (LineNoCacheTy *Cache = getCache(LineNoCache))
     delete Cache;
-    
+
   while (!Buffers.empty()) {
     delete Buffers.back().Buffer;
     Buffers.pop_back();
@@ -47,7 +47,7 @@ SourceMgr::~SourceMgr() {
 /// ~0, otherwise it returns the buffer ID of the stacked file.
 unsigned SourceMgr::AddIncludeFile(const std::string &Filename,
                                    SMLoc IncludeLoc) {
-  
+
   MemoryBuffer *NewBuf = MemoryBuffer::getFile(Filename.c_str());
 
   // If the file didn't exist directly, see if it's in an include path.
@@ -55,7 +55,7 @@ unsigned SourceMgr::AddIncludeFile(const std::string &Filename,
     std::string IncFile = IncludeDirectories[i] + "/" + Filename;
     NewBuf = MemoryBuffer::getFile(IncFile.c_str());
   }
- 
+
   if (NewBuf == 0) return ~0U;
 
   return AddNewSourceBuffer(NewBuf, IncludeLoc);
@@ -79,20 +79,20 @@ int SourceMgr::FindBufferContainingLoc(SMLoc Loc) const {
 unsigned SourceMgr::FindLineNumber(SMLoc Loc, int BufferID) const {
   if (BufferID == -1) BufferID = FindBufferContainingLoc(Loc);
   assert(BufferID != -1 && "Invalid Location!");
-  
+
   MemoryBuffer *Buff = getBufferInfo(BufferID).Buffer;
-  
+
   // Count the number of \n's between the start of the file and the specified
   // location.
   unsigned LineNo = 1;
-  
+
   const char *Ptr = Buff->getBufferStart();
 
   // If we have a line number cache, and if the query is to a later point in the
   // same file, start searching from the last query location.  This optimizes
   // for the case when multiple diagnostics come out of one file in order.
   if (LineNoCacheTy *Cache = getCache(LineNoCache))
-    if (Cache->LastQueryBufferID == BufferID && 
+    if (Cache->LastQueryBufferID == BufferID &&
         Cache->LastQuery <= Loc.getPointer()) {
       Ptr = Cache->LastQuery;
       LineNo = Cache->LineNoOfQuery;
@@ -102,12 +102,12 @@ unsigned SourceMgr::FindLineNumber(SMLoc Loc, int BufferID) const {
   // we see.
   for (; SMLoc::getFromPointer(Ptr) != Loc; ++Ptr)
     if (*Ptr == '\n') ++LineNo;
-  
-  
+
+
   // Allocate the line number cache if it doesn't exist.
   if (LineNoCache == 0)
     LineNoCache = new LineNoCacheTy();
-  
+
   // Update the line # cache.
   LineNoCacheTy &Cache = *getCache(LineNoCache);
   Cache.LastQueryBufferID = BufferID;
@@ -118,12 +118,12 @@ unsigned SourceMgr::FindLineNumber(SMLoc Loc, int BufferID) const {
 
 void SourceMgr::PrintIncludeStack(SMLoc IncludeLoc, raw_ostream &OS) const {
   if (IncludeLoc == SMLoc()) return;  // Top of stack.
-  
+
   int CurBuf = FindBufferContainingLoc(IncludeLoc);
   assert(CurBuf != -1 && "Invalid or unspecified location!");
 
   PrintIncludeStack(getBufferInfo(CurBuf).IncludeLoc, OS);
-  
+
   OS << "Included from "
      << getBufferInfo(CurBuf).Buffer->getBufferIdentifier()
      << ":" << FindLineNumber(IncludeLoc, CurBuf) << ":\n";
@@ -137,12 +137,12 @@ void SourceMgr::PrintIncludeStack(SMLoc IncludeLoc, raw_ostream &OS) const {
 /// prefixed to the message.
 SMDiagnostic SourceMgr::GetMessage(SMLoc Loc, const std::string &Msg,
                                    const char *Type, bool ShowLine) const {
-  
+
   // First thing to do: find the current buffer containing the specified
   // location.
   int CurBuf = FindBufferContainingLoc(Loc);
   assert(CurBuf != -1 && "Invalid or unspecified location!");
-  
+
   MemoryBuffer *CurMB = getBufferInfo(CurBuf).Buffer;
 
   // Scan backward to find the start of the line.
@@ -160,7 +160,7 @@ SMDiagnostic SourceMgr::GetMessage(SMLoc Loc, const std::string &Msg,
       ++LineEnd;
     LineStr = std::string(LineStart, LineEnd);
   }
-  
+
   std::string PrintedMsg;
   if (Type) {
     PrintedMsg = Type;
@@ -173,7 +173,7 @@ SMDiagnostic SourceMgr::GetMessage(SMLoc Loc, const std::string &Msg,
                       LineStr, ShowLine);
 }
 
-void SourceMgr::PrintMessage(SMLoc Loc, const std::string &Msg, 
+void SourceMgr::PrintMessage(SMLoc Loc, const std::string &Msg,
                              const char *Type, bool ShowLine) const {
   raw_ostream &OS = errs();
 
@@ -188,7 +188,7 @@ void SourceMgr::PrintMessage(SMLoc Loc, const std::string &Msg,
 // SMDiagnostic Implementation
 //===----------------------------------------------------------------------===//
 
-void SMDiagnostic::Print(const char *ProgName, raw_ostream &S) {
+void SMDiagnostic::Print(const char *ProgName, raw_ostream &S) const {
   if (ProgName && ProgName[0])
     S << ProgName << ": ";
 
@@ -197,7 +197,7 @@ void SMDiagnostic::Print(const char *ProgName, raw_ostream &S) {
       S << "<stdin>";
     else
       S << Filename;
-  
+
     if (LineNo != -1) {
       S << ':' << LineNo;
       if (ColumnNo != -1)
@@ -205,12 +205,12 @@ void SMDiagnostic::Print(const char *ProgName, raw_ostream &S) {
     }
     S << ": ";
   }
-  
+
   S << Message << '\n';
 
   if (LineNo != -1 && ColumnNo != -1 && ShowLine) {
     S << LineContents << '\n';
-    
+
     // Print out spaces/tabs before the caret.
     for (unsigned i = 0; i != unsigned(ColumnNo); ++i)
       S << (LineContents[i] == '\t' ? '\t' : ' ');
diff --git a/libclamav/c++/llvm/lib/Support/Triple.cpp b/libclamav/c++/llvm/lib/Support/Triple.cpp
index 2fec094..5a76184 100644
--- a/libclamav/c++/llvm/lib/Support/Triple.cpp
+++ b/libclamav/c++/llvm/lib/Support/Triple.cpp
@@ -33,6 +33,7 @@ const char *Triple::getArchTypeName(ArchType Kind) {
   case ppc64:   return "powerpc64";
   case ppc:     return "powerpc";
   case sparc:   return "sparc";
+  case sparcv9: return "sparcv9";
   case systemz: return "s390x";
   case tce:     return "tce";
   case thumb:   return "thumb";
@@ -61,6 +62,7 @@ const char *Triple::getArchTypePrefix(ArchType Kind) {
   case ppc64:
   case ppc:     return "ppc";
 
+  case sparcv9:
   case sparc:   return "sparc";
 
   case x86:
@@ -127,6 +129,8 @@ Triple::ArchType Triple::getArchTypeForLLVMName(StringRef Name) {
     return ppc;
   if (Name == "sparc")
     return sparc;
+  if (Name == "sparcv9")
+    return sparcv9;
   if (Name == "systemz")
     return systemz;
   if (Name == "tce")
@@ -250,6 +254,8 @@ void Triple::Parse() const {
     Arch = mipsel;
   else if (ArchName == "sparc")
     Arch = sparc;
+  else if (ArchName == "sparcv9")
+    Arch = sparcv9;
   else if (ArchName == "s390x")
     Arch = systemz;
   else if (ArchName == "tce")
diff --git a/libclamav/c++/llvm/lib/Support/raw_ostream.cpp b/libclamav/c++/llvm/lib/Support/raw_ostream.cpp
index 10d7ec0..25c3fbd 100644
--- a/libclamav/c++/llvm/lib/Support/raw_ostream.cpp
+++ b/libclamav/c++/llvm/lib/Support/raw_ostream.cpp
@@ -20,7 +20,7 @@
 #include "llvm/Support/Compiler.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/ADT/STLExtras.h"
-#include "llvm/ADT/StringExtras.h"
+#include <cctype>
 #include <sys/stat.h>
 #include <sys/types.h>
 
@@ -209,7 +209,7 @@ raw_ostream &raw_ostream::operator<<(const void *P) {
 }
 
 raw_ostream &raw_ostream::operator<<(double N) {
-  return this->operator<<(ftostr(N));
+  return this->operator<<(format("%e", N));
 }
 
 
@@ -574,12 +574,18 @@ void raw_svector_ostream::resync() {
 }
 
 void raw_svector_ostream::write_impl(const char *Ptr, size_t Size) {
-  assert(Ptr == OS.end() && OS.size() + Size <= OS.capacity() &&
-         "Invalid write_impl() call!");
-
-  // We don't need to copy the bytes, just commit the bytes to the
-  // SmallVector.
-  OS.set_size(OS.size() + Size);
+  // If we're writing bytes from the end of the buffer into the smallvector, we
+  // don't need to copy the bytes, just commit the bytes because they are
+  // already in the right place.
+  if (Ptr == OS.end()) {
+    assert(OS.size() + Size <= OS.capacity() && "Invalid write_impl() call!");
+    OS.set_size(OS.size() + Size);
+  } else {
+    assert(GetNumBytesInBuffer() == 0 &&
+           "Should be writing from buffer if some bytes in it");
+    // Otherwise, do copy the bytes.
+    OS.append(Ptr, Ptr+Size);
+  }
 
   // Grow the vector if necessary.
   if (OS.capacity() - OS.size() < 64)
diff --git a/libclamav/c++/llvm/lib/System/Unix/Program.inc b/libclamav/c++/llvm/lib/System/Unix/Program.inc
index 43c3606..c10498a 100644
--- a/libclamav/c++/llvm/lib/System/Unix/Program.inc
+++ b/libclamav/c++/llvm/lib/System/Unix/Program.inc
@@ -126,7 +126,7 @@ static void TimeOutHandler(int Sig) {
 
 static void SetMemoryLimits (unsigned size)
 {
-#if HAVE_SYS_RESOURCE_H
+#if HAVE_SYS_RESOURCE_H && HAVE_GETRLIMIT && HAVE_SETRLIMIT
   struct rlimit r;
   __typeof__ (r.rlim_cur) limit = (__typeof__ (r.rlim_cur)) (size) * 1048576;
 
@@ -323,4 +323,9 @@ bool Program::ChangeStdoutToBinary(){
   return false;
 }
 
+bool Program::ChangeStderrToBinary(){
+  // Do nothing, as Unix doesn't differentiate between text and binary.
+  return false;
+}
+
 }
diff --git a/libclamav/c++/llvm/lib/System/Unix/Signals.inc b/libclamav/c++/llvm/lib/System/Unix/Signals.inc
index b6f6d53..e5ec4df 100644
--- a/libclamav/c++/llvm/lib/System/Unix/Signals.inc
+++ b/libclamav/c++/llvm/lib/System/Unix/Signals.inc
@@ -52,7 +52,16 @@ static const int *const IntSigsEnd =
 // KillSigs - Signals that are synchronous with the program that will cause it
 // to die.
 static const int KillSigs[] = {
-  SIGILL, SIGTRAP, SIGABRT, SIGFPE, SIGBUS, SIGSEGV, SIGSYS, SIGXCPU, SIGXFSZ
+  SIGILL, SIGTRAP, SIGABRT, SIGFPE, SIGBUS, SIGSEGV
+#ifdef SIGSYS
+  , SIGSYS
+#endif
+#ifdef SIGXCPU
+  , SIGXCPU
+#endif
+#ifdef SIGXFSZ
+  , SIGXFSZ
+#endif
 #ifdef SIGEMT
   , SIGEMT
 #endif
diff --git a/libclamav/c++/llvm/lib/System/Win32/Program.inc b/libclamav/c++/llvm/lib/System/Win32/Program.inc
index a69826f..a3b40d0 100644
--- a/libclamav/c++/llvm/lib/System/Win32/Program.inc
+++ b/libclamav/c++/llvm/lib/System/Win32/Program.inc
@@ -379,4 +379,9 @@ bool Program::ChangeStdoutToBinary(){
   return result == -1;
 }
 
+bool Program::ChangeStderrToBinary(){
+  int result = _setmode( _fileno(stderr), _O_BINARY );
+  return result == -1;
+}
+
 }
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARM.h b/libclamav/c++/llvm/lib/Target/ARM/ARM.h
index 21445ad..b08f942 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARM.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARM.h
@@ -23,9 +23,7 @@ namespace llvm {
 
 class ARMBaseTargetMachine;
 class FunctionPass;
-class MachineCodeEmitter;
 class JITCodeEmitter;
-class ObjectCodeEmitter;
 class formatted_raw_ostream;
 
 // Enums corresponding to ARM condition codes
@@ -95,12 +93,8 @@ inline static const char *ARMCondCodeToString(ARMCC::CondCodes CC) {
 FunctionPass *createARMISelDag(ARMBaseTargetMachine &TM,
                                CodeGenOpt::Level OptLevel);
 
-FunctionPass *createARMCodeEmitterPass(ARMBaseTargetMachine &TM,
-                                       MachineCodeEmitter &MCE);
 FunctionPass *createARMJITCodeEmitterPass(ARMBaseTargetMachine &TM,
                                           JITCodeEmitter &JCE);
-FunctionPass *createARMObjectCodeEmitterPass(ARMBaseTargetMachine &TM,
-                                             ObjectCodeEmitter &OCE);
 
 FunctionPass *createARMLoadStoreOptimizationPass(bool PreAlloc = false);
 FunctionPass *createARMExpandPseudoPass();
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp
index 1e52211..6fe7c2c 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseInstrInfo.cpp
@@ -450,10 +450,10 @@ unsigned ARMBaseInstrInfo::GetInstSizeInBytes(const MachineInstr *MI) const {
     switch (Opc) {
     default:
       llvm_unreachable("Unknown or unset size field for instr!");
-    case TargetInstrInfo::IMPLICIT_DEF:
-    case TargetInstrInfo::KILL:
-    case TargetInstrInfo::DBG_LABEL:
-    case TargetInstrInfo::EH_LABEL:
+    case TargetOpcode::IMPLICIT_DEF:
+    case TargetOpcode::KILL:
+    case TargetOpcode::DBG_LABEL:
+    case TargetOpcode::EH_LABEL:
       return 0;
     }
     break;
@@ -470,9 +470,9 @@ unsigned ARMBaseInstrInfo::GetInstSizeInBytes(const MachineInstr *MI) const {
     case ARM::Int_eh_sjlj_setjmp:
       return 24;
     case ARM::tInt_eh_sjlj_setjmp:
-      return 22;
+      return 14;
     case ARM::t2Int_eh_sjlj_setjmp:
-      return 22;
+      return 14;
     case ARM::BR_JTr:
     case ARM::BR_JTm:
     case ARM::BR_JTadd:
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.cpp
index ba9e044..91e3550 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.cpp
@@ -478,7 +478,7 @@ ARMBaseRegisterInfo::UpdateRegAllocHint(unsigned Reg, unsigned NewReg,
 ///
 bool ARMBaseRegisterInfo::hasFP(const MachineFunction &MF) const {
   const MachineFrameInfo *MFI = MF.getFrameInfo();
-  return (NoFramePointerElim ||
+  return ((NoFramePointerElim && MFI->hasCalls())||
           needsStackRealignment(MF) ||
           MFI->hasVarSizedObjects() ||
           MFI->isFrameAddressTaken());
@@ -583,14 +583,6 @@ ARMBaseRegisterInfo::processFunctionBeforeCalleeSavedScan(MachineFunction &MF,
   SmallVector<unsigned, 4> UnspilledCS2GPRs;
   ARMFunctionInfo *AFI = MF.getInfo<ARMFunctionInfo>();
 
-
-  // Calculate and set max stack object alignment early, so we can decide
-  // whether we will need stack realignment (and thus FP).
-  if (RealignStack) {
-    MachineFrameInfo *MFI = MF.getFrameInfo();
-    MFI->calculateMaxStackAlignment();
-  }
-
   // Spill R4 if Thumb2 function requires stack realignment - it will be used as
   // scratch register.
   // FIXME: It will be better just to find spare register here.
@@ -803,10 +795,10 @@ ARMBaseRegisterInfo::getFrameRegister(const MachineFunction &MF) const {
 }
 
 int
-ARMBaseRegisterInfo::getFrameIndexReference(MachineFunction &MF, int FI,
+ARMBaseRegisterInfo::getFrameIndexReference(const MachineFunction &MF, int FI,
                                             unsigned &FrameReg) const {
   const MachineFrameInfo *MFI = MF.getFrameInfo();
-  ARMFunctionInfo *AFI = MF.getInfo<ARMFunctionInfo>();
+  const ARMFunctionInfo *AFI = MF.getInfo<ARMFunctionInfo>();
   int Offset = MFI->getObjectOffset(FI) + MFI->getStackSize();
   bool isFixed = MFI->isFixedObjectIndex(FI);
 
@@ -845,7 +837,8 @@ ARMBaseRegisterInfo::getFrameIndexReference(MachineFunction &MF, int FI,
 
 
 int
-ARMBaseRegisterInfo::getFrameIndexOffset(MachineFunction &MF, int FI) const {
+ARMBaseRegisterInfo::getFrameIndexOffset(const MachineFunction &MF,
+                                         int FI) const {
   unsigned FrameReg;
   return getFrameIndexReference(MF, FI, FrameReg);
 }
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.h b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.h
index f5ca25c..33ba21d 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMBaseRegisterInfo.h
@@ -107,9 +107,9 @@ public:
   // Debug information queries.
   unsigned getRARegister() const;
   unsigned getFrameRegister(const MachineFunction &MF) const;
-  int getFrameIndexReference(MachineFunction &MF, int FI,
+  int getFrameIndexReference(const MachineFunction &MF, int FI,
                              unsigned &FrameReg) const;
-  int getFrameIndexOffset(MachineFunction &MF, int FI) const;
+  int getFrameIndexOffset(const MachineFunction &MF, int FI) const;
 
   // Exception handling queries.
   unsigned getEHExceptionRegister() const;
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMCodeEmitter.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMCodeEmitter.cpp
index 81e3db7..bd703f4 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMCodeEmitter.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMCodeEmitter.cpp
@@ -24,9 +24,7 @@
 #include "llvm/DerivedTypes.h"
 #include "llvm/Function.h"
 #include "llvm/PassManager.h"
-#include "llvm/CodeGen/MachineCodeEmitter.h"
 #include "llvm/CodeGen/JITCodeEmitter.h"
-#include "llvm/CodeGen/ObjectCodeEmitter.h"
 #include "llvm/CodeGen/MachineConstantPool.h"
 #include "llvm/CodeGen/MachineFunctionPass.h"
 #include "llvm/CodeGen/MachineInstr.h"
@@ -46,42 +44,34 @@ STATISTIC(NumEmitted, "Number of machine instructions emitted");
 
 namespace {
 
-  class ARMCodeEmitter {
-  public:
-    /// getBinaryCodeForInstr - This function, generated by the
-    /// CodeEmitterGenerator using TableGen, produces the binary encoding for
-    /// machine instructions.
-    unsigned getBinaryCodeForInstr(const MachineInstr &MI);
-  };
-
-  template<class CodeEmitter>
-  class Emitter : public MachineFunctionPass, public ARMCodeEmitter {
+  class ARMCodeEmitter : public MachineFunctionPass {
     ARMJITInfo                *JTI;
     const ARMInstrInfo        *II;
     const TargetData          *TD;
     const ARMSubtarget        *Subtarget;
     TargetMachine             &TM;
-    CodeEmitter               &MCE;
+    JITCodeEmitter            &MCE;
     const std::vector<MachineConstantPoolEntry> *MCPEs;
     const std::vector<MachineJumpTableEntry> *MJTEs;
     bool IsPIC;
-
+    
     void getAnalysisUsage(AnalysisUsage &AU) const {
       AU.addRequired<MachineModuleInfo>();
       MachineFunctionPass::getAnalysisUsage(AU);
     }
-
-  public:
+    
     static char ID;
-    explicit Emitter(TargetMachine &tm, CodeEmitter &mce)
-      : MachineFunctionPass(&ID), JTI(0), II(0), TD(0), TM(tm),
-      MCE(mce), MCPEs(0), MJTEs(0),
-      IsPIC(TM.getRelocationModel() == Reloc::PIC_) {}
-    Emitter(TargetMachine &tm, CodeEmitter &mce,
-            const ARMInstrInfo &ii, const TargetData &td)
-      : MachineFunctionPass(&ID), JTI(0), II(&ii), TD(&td), TM(tm),
-      MCE(mce), MCPEs(0), MJTEs(0),
-      IsPIC(TM.getRelocationModel() == Reloc::PIC_) {}
+  public:
+    ARMCodeEmitter(TargetMachine &tm, JITCodeEmitter &mce)
+      : MachineFunctionPass(&ID), JTI(0), II((ARMInstrInfo*)tm.getInstrInfo()),
+        TD(tm.getTargetData()), TM(tm),
+    MCE(mce), MCPEs(0), MJTEs(0),
+    IsPIC(TM.getRelocationModel() == Reloc::PIC_) {}
+    
+    /// getBinaryCodeForInstr - This function, generated by the
+    /// CodeEmitterGenerator using TableGen, produces the binary encoding for
+    /// machine instructions.
+    unsigned getBinaryCodeForInstr(const MachineInstr &MI);
 
     bool runOnMachineFunction(MachineFunction &MF);
 
@@ -94,21 +84,13 @@ namespace {
   private:
 
     void emitWordLE(unsigned Binary);
-
     void emitDWordLE(uint64_t Binary);
-
     void emitConstPoolInstruction(const MachineInstr &MI);
-
     void emitMOVi2piecesInstruction(const MachineInstr &MI);
-
     void emitLEApcrelJTInstruction(const MachineInstr &MI);
-
     void emitPseudoMoveInstruction(const MachineInstr &MI);
-
     void addPCLabel(unsigned LabelID);
-
     void emitPseudoInstruction(const MachineInstr &MI);
-
     unsigned getMachineSoRegOpValue(const MachineInstr &MI,
                                     const TargetInstrDesc &TID,
                                     const MachineOperand &MO,
@@ -176,28 +158,18 @@ namespace {
     void emitMachineBasicBlock(MachineBasicBlock *BB, unsigned Reloc,
                                intptr_t JTBase = 0);
   };
-  template <class CodeEmitter>
-  char Emitter<CodeEmitter>::ID = 0;
 }
 
-/// createARMCodeEmitterPass - Return a pass that emits the collected ARM code
-/// to the specified MCE object.
+char ARMCodeEmitter::ID = 0;
 
-FunctionPass *llvm::createARMCodeEmitterPass(ARMBaseTargetMachine &TM,
-                                             MachineCodeEmitter &MCE) {
-  return new Emitter<MachineCodeEmitter>(TM, MCE);
-}
+/// createARMJITCodeEmitterPass - Return a pass that emits the collected ARM 
+/// code to the specified MCE object.
 FunctionPass *llvm::createARMJITCodeEmitterPass(ARMBaseTargetMachine &TM,
                                                 JITCodeEmitter &JCE) {
-  return new Emitter<JITCodeEmitter>(TM, JCE);
-}
-FunctionPass *llvm::createARMObjectCodeEmitterPass(ARMBaseTargetMachine &TM,
-                                                   ObjectCodeEmitter &OCE) {
-  return new Emitter<ObjectCodeEmitter>(TM, OCE);
+  return new ARMCodeEmitter(TM, JCE);
 }
 
-template<class CodeEmitter>
-bool Emitter<CodeEmitter>::runOnMachineFunction(MachineFunction &MF) {
+bool ARMCodeEmitter::runOnMachineFunction(MachineFunction &MF) {
   assert((MF.getTarget().getRelocationModel() != Reloc::Default ||
           MF.getTarget().getRelocationModel() != Reloc::Static) &&
          "JIT relocation model must be set to static or default!");
@@ -230,8 +202,7 @@ bool Emitter<CodeEmitter>::runOnMachineFunction(MachineFunction &MF) {
 
 /// getShiftOp - Return the shift opcode (bit[6:5]) of the immediate value.
 ///
-template<class CodeEmitter>
-unsigned Emitter<CodeEmitter>::getShiftOp(unsigned Imm) const {
+unsigned ARMCodeEmitter::getShiftOp(unsigned Imm) const {
   switch (ARM_AM::getAM2ShiftOpc(Imm)) {
   default: llvm_unreachable("Unknown shift opc!");
   case ARM_AM::asr: return 2;
@@ -245,9 +216,8 @@ unsigned Emitter<CodeEmitter>::getShiftOp(unsigned Imm) const {
 
 /// getMachineOpValue - Return binary encoding of operand. If the machine
 /// operand requires relocation, record the relocation and return zero.
-template<class CodeEmitter>
-unsigned Emitter<CodeEmitter>::getMachineOpValue(const MachineInstr &MI,
-                                                 const MachineOperand &MO) {
+unsigned ARMCodeEmitter::getMachineOpValue(const MachineInstr &MI,
+                                           const MachineOperand &MO) {
   if (MO.isReg())
     return ARMRegisterInfo::getRegisterNumbering(MO.getReg());
   else if (MO.isImm())
@@ -277,10 +247,9 @@ unsigned Emitter<CodeEmitter>::getMachineOpValue(const MachineInstr &MI,
 
 /// emitGlobalAddress - Emit the specified address to the code stream.
 ///
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitGlobalAddress(GlobalValue *GV, unsigned Reloc,
-                                             bool MayNeedFarStub, bool Indirect,
-                                             intptr_t ACPV) {
+void ARMCodeEmitter::emitGlobalAddress(GlobalValue *GV, unsigned Reloc,
+                                       bool MayNeedFarStub, bool Indirect,
+                                       intptr_t ACPV) {
   MachineRelocation MR = Indirect
     ? MachineRelocation::getIndirectSymbol(MCE.getCurrentPCOffset(), Reloc,
                                            GV, ACPV, MayNeedFarStub)
@@ -292,9 +261,7 @@ void Emitter<CodeEmitter>::emitGlobalAddress(GlobalValue *GV, unsigned Reloc,
 /// emitExternalSymbolAddress - Arrange for the address of an external symbol to
 /// be emitted to the current location in the function, and allow it to be PC
 /// relative.
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitExternalSymbolAddress(const char *ES,
-                                                     unsigned Reloc) {
+void ARMCodeEmitter::emitExternalSymbolAddress(const char *ES, unsigned Reloc) {
   MCE.addRelocation(MachineRelocation::getExtSym(MCE.getCurrentPCOffset(),
                                                  Reloc, ES));
 }
@@ -302,9 +269,7 @@ void Emitter<CodeEmitter>::emitExternalSymbolAddress(const char *ES,
 /// emitConstPoolAddress - Arrange for the address of an constant pool
 /// to be emitted to the current location in the function, and allow it to be PC
 /// relative.
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitConstPoolAddress(unsigned CPI,
-                                                unsigned Reloc) {
+void ARMCodeEmitter::emitConstPoolAddress(unsigned CPI, unsigned Reloc) {
   // Tell JIT emitter we'll resolve the address.
   MCE.addRelocation(MachineRelocation::getConstPool(MCE.getCurrentPCOffset(),
                                                     Reloc, CPI, 0, true));
@@ -313,37 +278,31 @@ void Emitter<CodeEmitter>::emitConstPoolAddress(unsigned CPI,
 /// emitJumpTableAddress - Arrange for the address of a jump table to
 /// be emitted to the current location in the function, and allow it to be PC
 /// relative.
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitJumpTableAddress(unsigned JTIndex,
-                                                unsigned Reloc) {
+void ARMCodeEmitter::emitJumpTableAddress(unsigned JTIndex, unsigned Reloc) {
   MCE.addRelocation(MachineRelocation::getJumpTable(MCE.getCurrentPCOffset(),
                                                     Reloc, JTIndex, 0, true));
 }
 
 /// emitMachineBasicBlock - Emit the specified address basic block.
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitMachineBasicBlock(MachineBasicBlock *BB,
-                                              unsigned Reloc, intptr_t JTBase) {
+void ARMCodeEmitter::emitMachineBasicBlock(MachineBasicBlock *BB,
+                                           unsigned Reloc, intptr_t JTBase) {
   MCE.addRelocation(MachineRelocation::getBB(MCE.getCurrentPCOffset(),
                                              Reloc, BB, JTBase));
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitWordLE(unsigned Binary) {
+void ARMCodeEmitter::emitWordLE(unsigned Binary) {
   DEBUG(errs() << "  0x";
         errs().write_hex(Binary) << "\n");
   MCE.emitWordLE(Binary);
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitDWordLE(uint64_t Binary) {
+void ARMCodeEmitter::emitDWordLE(uint64_t Binary) {
   DEBUG(errs() << "  0x";
         errs().write_hex(Binary) << "\n");
   MCE.emitDWordLE(Binary);
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI) {
+void ARMCodeEmitter::emitInstruction(const MachineInstr &MI) {
   DEBUG(errs() << "JIT: " << (void*)MCE.getCurrentPCValue() << ":\t" << MI);
 
   MCE.processDebugLoc(MI.getDebugLoc(), true);
@@ -412,8 +371,7 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI) {
   MCE.processDebugLoc(MI.getDebugLoc(), false);
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitConstPoolInstruction(const MachineInstr &MI) {
+void ARMCodeEmitter::emitConstPoolInstruction(const MachineInstr &MI) {
   unsigned CPI = MI.getOperand(0).getImm();       // CP instruction index.
   unsigned CPIndex = MI.getOperand(1).getIndex(); // Actual cp entry index.
   const MachineConstantPoolEntry &MCPE = (*MCPEs)[CPIndex];
@@ -475,8 +433,7 @@ void Emitter<CodeEmitter>::emitConstPoolInstruction(const MachineInstr &MI) {
   }
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitMOVi2piecesInstruction(const MachineInstr &MI) {
+void ARMCodeEmitter::emitMOVi2piecesInstruction(const MachineInstr &MI) {
   const MachineOperand &MO0 = MI.getOperand(0);
   const MachineOperand &MO1 = MI.getOperand(1);
   assert(MO1.isImm() && ARM_AM::getSOImmVal(MO1.isImm()) != -1 &&
@@ -518,8 +475,7 @@ void Emitter<CodeEmitter>::emitMOVi2piecesInstruction(const MachineInstr &MI) {
   emitWordLE(Binary);
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitLEApcrelJTInstruction(const MachineInstr &MI) {
+void ARMCodeEmitter::emitLEApcrelJTInstruction(const MachineInstr &MI) {
   // It's basically add r, pc, (LJTI - $+8)
 
   const TargetInstrDesc &TID = MI.getDesc();
@@ -546,8 +502,7 @@ void Emitter<CodeEmitter>::emitLEApcrelJTInstruction(const MachineInstr &MI) {
   emitWordLE(Binary);
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitPseudoMoveInstruction(const MachineInstr &MI) {
+void ARMCodeEmitter::emitPseudoMoveInstruction(const MachineInstr &MI) {
   unsigned Opcode = MI.getDesc().Opcode;
 
   // Part of binary is determined by TableGn.
@@ -586,21 +541,19 @@ void Emitter<CodeEmitter>::emitPseudoMoveInstruction(const MachineInstr &MI) {
   emitWordLE(Binary);
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::addPCLabel(unsigned LabelID) {
+void ARMCodeEmitter::addPCLabel(unsigned LabelID) {
   DEBUG(errs() << "  ** LPC" << LabelID << " @ "
         << (void*)MCE.getCurrentPCValue() << '\n');
   JTI->addPCLabelAddr(LabelID, MCE.getCurrentPCValue());
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitPseudoInstruction(const MachineInstr &MI) {
+void ARMCodeEmitter::emitPseudoInstruction(const MachineInstr &MI) {
   unsigned Opcode = MI.getDesc().Opcode;
   switch (Opcode) {
   default:
     llvm_unreachable("ARMCodeEmitter::emitPseudoInstruction");
   // FIXME: Add support for MOVimm32.
-  case TargetInstrInfo::INLINEASM: {
+  case TargetOpcode::INLINEASM: {
     // We allow inline assembler nodes with empty bodies - they can
     // implicitly define registers, which is ok for JIT.
     if (MI.getOperand(0).getSymbolName()[0]) {
@@ -608,12 +561,12 @@ void Emitter<CodeEmitter>::emitPseudoInstruction(const MachineInstr &MI) {
     }
     break;
   }
-  case TargetInstrInfo::DBG_LABEL:
-  case TargetInstrInfo::EH_LABEL:
+  case TargetOpcode::DBG_LABEL:
+  case TargetOpcode::EH_LABEL:
     MCE.emitLabel(MI.getOperand(0).getImm());
     break;
-  case TargetInstrInfo::IMPLICIT_DEF:
-  case TargetInstrInfo::KILL:
+  case TargetOpcode::IMPLICIT_DEF:
+  case TargetOpcode::KILL:
     // Do nothing.
     break;
   case ARM::CONSTPOOL_ENTRY:
@@ -662,8 +615,7 @@ void Emitter<CodeEmitter>::emitPseudoInstruction(const MachineInstr &MI) {
   }
 }
 
-template<class CodeEmitter>
-unsigned Emitter<CodeEmitter>::getMachineSoRegOpValue(
+unsigned ARMCodeEmitter::getMachineSoRegOpValue(
                                                 const MachineInstr &MI,
                                                 const TargetInstrDesc &TID,
                                                 const MachineOperand &MO,
@@ -722,8 +674,7 @@ unsigned Emitter<CodeEmitter>::getMachineSoRegOpValue(
   return Binary | ARM_AM::getSORegOffset(MO2.getImm()) << 7;
 }
 
-template<class CodeEmitter>
-unsigned Emitter<CodeEmitter>::getMachineSoImmOpValue(unsigned SoImm) {
+unsigned ARMCodeEmitter::getMachineSoImmOpValue(unsigned SoImm) {
   int SoImmVal = ARM_AM::getSOImmVal(SoImm);
   assert(SoImmVal != -1 && "Not a valid so_imm value!");
 
@@ -736,8 +687,7 @@ unsigned Emitter<CodeEmitter>::getMachineSoImmOpValue(unsigned SoImm) {
   return Binary;
 }
 
-template<class CodeEmitter>
-unsigned Emitter<CodeEmitter>::getAddrModeSBit(const MachineInstr &MI,
+unsigned ARMCodeEmitter::getAddrModeSBit(const MachineInstr &MI,
                                              const TargetInstrDesc &TID) const {
   for (unsigned i = MI.getNumOperands(), e = TID.getNumOperands(); i != e; --i){
     const MachineOperand &MO = MI.getOperand(i-1);
@@ -747,8 +697,7 @@ unsigned Emitter<CodeEmitter>::getAddrModeSBit(const MachineInstr &MI,
   return 0;
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitDataProcessingInstruction(
+void ARMCodeEmitter::emitDataProcessingInstruction(
                                                    const MachineInstr &MI,
                                                    unsigned ImplicitRd,
                                                    unsigned ImplicitRn) {
@@ -814,8 +763,7 @@ void Emitter<CodeEmitter>::emitDataProcessingInstruction(
   emitWordLE(Binary);
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitLoadStoreInstruction(
+void ARMCodeEmitter::emitLoadStoreInstruction(
                                               const MachineInstr &MI,
                                               unsigned ImplicitRd,
                                               unsigned ImplicitRn) {
@@ -890,8 +838,7 @@ void Emitter<CodeEmitter>::emitLoadStoreInstruction(
   emitWordLE(Binary);
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitMiscLoadStoreInstruction(const MachineInstr &MI,
+void ARMCodeEmitter::emitMiscLoadStoreInstruction(const MachineInstr &MI,
                                                         unsigned ImplicitRn) {
   const TargetInstrDesc &TID = MI.getDesc();
   unsigned Form = TID.TSFlags & ARMII::FormMask;
@@ -978,8 +925,7 @@ static unsigned getAddrModeUPBits(unsigned Mode) {
   return Binary;
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitLoadStoreMultipleInstruction(
+void ARMCodeEmitter::emitLoadStoreMultipleInstruction(
                                                        const MachineInstr &MI) {
   // Part of binary is determined by TableGn.
   unsigned Binary = getBinaryCodeForInstr(MI);
@@ -1012,8 +958,7 @@ void Emitter<CodeEmitter>::emitLoadStoreMultipleInstruction(
   emitWordLE(Binary);
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitMulFrmInstruction(const MachineInstr &MI) {
+void ARMCodeEmitter::emitMulFrmInstruction(const MachineInstr &MI) {
   const TargetInstrDesc &TID = MI.getDesc();
 
   // Part of binary is determined by TableGn.
@@ -1050,8 +995,7 @@ void Emitter<CodeEmitter>::emitMulFrmInstruction(const MachineInstr &MI) {
   emitWordLE(Binary);
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitExtendInstruction(const MachineInstr &MI) {
+void ARMCodeEmitter::emitExtendInstruction(const MachineInstr &MI) {
   const TargetInstrDesc &TID = MI.getDesc();
 
   // Part of binary is determined by TableGn.
@@ -1088,8 +1032,7 @@ void Emitter<CodeEmitter>::emitExtendInstruction(const MachineInstr &MI) {
   emitWordLE(Binary);
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitMiscArithInstruction(const MachineInstr &MI) {
+void ARMCodeEmitter::emitMiscArithInstruction(const MachineInstr &MI) {
   const TargetInstrDesc &TID = MI.getDesc();
 
   // Part of binary is determined by TableGn.
@@ -1127,8 +1070,7 @@ void Emitter<CodeEmitter>::emitMiscArithInstruction(const MachineInstr &MI) {
   emitWordLE(Binary);
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitBranchInstruction(const MachineInstr &MI) {
+void ARMCodeEmitter::emitBranchInstruction(const MachineInstr &MI) {
   const TargetInstrDesc &TID = MI.getDesc();
 
   if (TID.Opcode == ARM::TPsoft) {
@@ -1147,8 +1089,7 @@ void Emitter<CodeEmitter>::emitBranchInstruction(const MachineInstr &MI) {
   emitWordLE(Binary);
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitInlineJumpTable(unsigned JTIndex) {
+void ARMCodeEmitter::emitInlineJumpTable(unsigned JTIndex) {
   // Remember the base address of the inline jump table.
   uintptr_t JTBase = MCE.getCurrentPCValue();
   JTI->addJumpTableBaseAddr(JTIndex, JTBase);
@@ -1168,8 +1109,7 @@ void Emitter<CodeEmitter>::emitInlineJumpTable(unsigned JTIndex) {
   }
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitMiscBranchInstruction(const MachineInstr &MI) {
+void ARMCodeEmitter::emitMiscBranchInstruction(const MachineInstr &MI) {
   const TargetInstrDesc &TID = MI.getDesc();
 
   // Handle jump tables.
@@ -1250,8 +1190,7 @@ static unsigned encodeVFPRm(const MachineInstr &MI, unsigned OpIdx) {
   return Binary;
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitVFPArithInstruction(const MachineInstr &MI) {
+void ARMCodeEmitter::emitVFPArithInstruction(const MachineInstr &MI) {
   const TargetInstrDesc &TID = MI.getDesc();
 
   // Part of binary is determined by TableGn.
@@ -1290,8 +1229,7 @@ void Emitter<CodeEmitter>::emitVFPArithInstruction(const MachineInstr &MI) {
   emitWordLE(Binary);
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitVFPConversionInstruction(
+void ARMCodeEmitter::emitVFPConversionInstruction(
       const MachineInstr &MI) {
   const TargetInstrDesc &TID = MI.getDesc();
   unsigned Form = TID.TSFlags & ARMII::FormMask;
@@ -1348,8 +1286,7 @@ void Emitter<CodeEmitter>::emitVFPConversionInstruction(
   emitWordLE(Binary);
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitVFPLoadStoreInstruction(const MachineInstr &MI) {
+void ARMCodeEmitter::emitVFPLoadStoreInstruction(const MachineInstr &MI) {
   // Part of binary is determined by TableGn.
   unsigned Binary = getBinaryCodeForInstr(MI);
 
@@ -1383,8 +1320,7 @@ void Emitter<CodeEmitter>::emitVFPLoadStoreInstruction(const MachineInstr &MI) {
   emitWordLE(Binary);
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitVFPLoadStoreMultipleInstruction(
+void ARMCodeEmitter::emitVFPLoadStoreMultipleInstruction(
                                                        const MachineInstr &MI) {
   // Part of binary is determined by TableGn.
   unsigned Binary = getBinaryCodeForInstr(MI);
@@ -1419,8 +1355,7 @@ void Emitter<CodeEmitter>::emitVFPLoadStoreMultipleInstruction(
   emitWordLE(Binary);
 }
 
-template<class CodeEmitter>
-void Emitter<CodeEmitter>::emitMiscInstruction(const MachineInstr &MI) {
+void ARMCodeEmitter::emitMiscInstruction(const MachineInstr &MI) {
   // Part of binary is determined by TableGn.
   unsigned Binary = getBinaryCodeForInstr(MI);
 
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMConstantIslandPass.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMConstantIslandPass.cpp
index 88c268c..8fa3c04 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMConstantIslandPass.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMConstantIslandPass.cpp
@@ -302,9 +302,9 @@ bool ARMConstantIslands::runOnMachineFunction(MachineFunction &MF) {
   // Thumb1 functions containing constant pools get 4-byte alignment.
   // This is so we can keep exact track of where the alignment padding goes.
 
-  // Set default. Thumb1 function is 2-byte aligned, ARM and Thumb2 are 4-byte
-  // aligned.
-  AFI->setAlign(isThumb1 ? 1U : 2U);
+  // ARM and Thumb2 functions need to be 4-byte aligned.
+  if (!isThumb1)
+    MF.EnsureAlignment(2);  // 2 = log2(4)
 
   // Perform the initial placement of the constant pool entries.  To start with,
   // we put them all at the end of the function.
@@ -312,7 +312,7 @@ bool ARMConstantIslands::runOnMachineFunction(MachineFunction &MF) {
   if (!MCP.isEmpty()) {
     DoInitialPlacement(MF, CPEMIs);
     if (isThumb1)
-      AFI->setAlign(2U);
+      MF.EnsureAlignment(2);  // 2 = log2(4)
   }
 
   /// The next UID to take is the first unused one.
@@ -506,7 +506,7 @@ void ARMConstantIslands::InitialFunctionScan(MachineFunction &MF,
         case ARM::tBR_JTr:
           // A Thumb1 table jump may involve padding; for the offsets to
           // be right, functions containing these must be 4-byte aligned.
-          AFI->setAlign(2U);
+          MF.EnsureAlignment(2U);
           if ((Offset+MBBSize)%4 != 0 || HasInlineAsm)
             // FIXME: Add a pseudo ALIGN instruction instead.
             MBBSize += 2;           // padding
@@ -732,7 +732,7 @@ MachineBasicBlock *ARMConstantIslands::SplitBlockBeforeInstr(MachineInstr *MI) {
 
     // This pass should be run after register allocation, so there should be no
     // PHI nodes to update.
-    assert((Succ->empty() || Succ->begin()->getOpcode() != TargetInstrInfo::PHI)
+    assert((Succ->empty() || !Succ->begin()->isPHI())
            && "PHI nodes should be eliminated by now!");
   }
 
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMISelDAGToDAG.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMISelDAGToDAG.cpp
index a260050..a458269 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMISelDAGToDAG.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMISelDAGToDAG.cpp
@@ -1007,12 +1007,12 @@ SDNode *ARMDAGToDAGISel::SelectDYN_ALLOC(SDNode *N) {
 SDNode *ARMDAGToDAGISel::PairDRegs(EVT VT, SDValue V0, SDValue V1) {
   DebugLoc dl = V0.getNode()->getDebugLoc();
   SDValue Undef =
-    SDValue(CurDAG->getMachineNode(TargetInstrInfo::IMPLICIT_DEF, dl, VT), 0);
+    SDValue(CurDAG->getMachineNode(TargetOpcode::IMPLICIT_DEF, dl, VT), 0);
   SDValue SubReg0 = CurDAG->getTargetConstant(ARM::DSUBREG_0, MVT::i32);
   SDValue SubReg1 = CurDAG->getTargetConstant(ARM::DSUBREG_1, MVT::i32);
-  SDNode *Pair = CurDAG->getMachineNode(TargetInstrInfo::INSERT_SUBREG, dl,
+  SDNode *Pair = CurDAG->getMachineNode(TargetOpcode::INSERT_SUBREG, dl,
                                         VT, Undef, V0, SubReg0);
-  return CurDAG->getMachineNode(TargetInstrInfo::INSERT_SUBREG, dl,
+  return CurDAG->getMachineNode(TargetOpcode::INSERT_SUBREG, dl,
                                 VT, SDValue(Pair, 0), V1, SubReg1);
 }
 
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.cpp
index 76c6a27..614e684 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.cpp
@@ -897,11 +897,13 @@ void ARMTargetLowering::PassF64ArgInRegs(DebugLoc dl, SelectionDAG &DAG,
 SDValue
 ARMTargetLowering::LowerCall(SDValue Chain, SDValue Callee,
                              CallingConv::ID CallConv, bool isVarArg,
-                             bool isTailCall,
+                             bool &isTailCall,
                              const SmallVectorImpl<ISD::OutputArg> &Outs,
                              const SmallVectorImpl<ISD::InputArg> &Ins,
                              DebugLoc dl, SelectionDAG &DAG,
                              SmallVectorImpl<SDValue> &InVals) {
+  // ARM target does not yet support tail call optimization.
+  isTailCall = false;
 
   // Analyze operands of the call, assigning locations to each operand.
   SmallVector<CCValAssign, 16> ArgLocs;
@@ -1438,7 +1440,8 @@ SDValue ARMTargetLowering::LowerGLOBAL_OFFSET_TABLE(SDValue Op,
 }
 
 SDValue
-ARMTargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op, SelectionDAG &DAG) {
+ARMTargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op, SelectionDAG &DAG,
+                                           const ARMSubtarget *Subtarget) {
   unsigned IntNo = cast<ConstantSDNode>(Op.getOperand(0))->getZExtValue();
   DebugLoc dl = Op.getDebugLoc();
   switch (IntNo) {
@@ -1474,7 +1477,11 @@ ARMTargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op, SelectionDAG &DAG) {
     return Result;
   }
   case Intrinsic::eh_sjlj_setjmp:
-    return DAG.getNode(ARMISD::EH_SJLJ_SETJMP, dl, MVT::i32, Op.getOperand(1));
+    SDValue Val = Subtarget->isThumb() ?
+      DAG.getCopyFromReg(DAG.getEntryNode(), dl, ARM::SP, MVT::i32) :
+      DAG.getConstant(0, MVT::i32);
+    return DAG.getNode(ARMISD::EH_SJLJ_SETJMP, dl, MVT::i32, Op.getOperand(1),
+                       Val);
   }
 }
 
@@ -3023,7 +3030,8 @@ SDValue ARMTargetLowering::LowerOperation(SDValue Op, SelectionDAG &DAG) {
   case ISD::RETURNADDR:    break;
   case ISD::FRAMEADDR:     return LowerFRAMEADDR(Op, DAG);
   case ISD::GLOBAL_OFFSET_TABLE: return LowerGLOBAL_OFFSET_TABLE(Op, DAG);
-  case ISD::INTRINSIC_WO_CHAIN: return LowerINTRINSIC_WO_CHAIN(Op, DAG);
+  case ISD::INTRINSIC_WO_CHAIN: return LowerINTRINSIC_WO_CHAIN(Op, DAG,
+                                                               Subtarget);
   case ISD::BIT_CONVERT:   return ExpandBIT_CONVERT(Op.getNode(), DAG);
   case ISD::SHL:
   case ISD::SRL:
@@ -3852,8 +3860,11 @@ bool ARMTargetLowering::allowsUnalignedMemoryAccesses(EVT VT) const {
   if (!Subtarget->hasV6Ops())
     // Pre-v6 does not support unaligned mem access.
     return false;
-  else if (!Subtarget->hasV6Ops()) {
-    // v6 may or may not support unaligned mem access.
+  else {
+    // v6+ may or may not support unaligned mem access depending on the system
+    // configuration.
+    // FIXME: This is pretty conservative. Should we provide cmdline option to
+    // control the behaviour?
     if (!Subtarget->isTargetDarwin())
       return false;
   }
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.h b/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.h
index cd9c027..3c5df45 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMISelLowering.h
@@ -278,7 +278,8 @@ namespace llvm {
                              const CCValAssign &VA,
                              ISD::ArgFlagsTy Flags);
     SDValue LowerINTRINSIC_W_CHAIN(SDValue Op, SelectionDAG &DAG);
-    SDValue LowerINTRINSIC_WO_CHAIN(SDValue Op, SelectionDAG &DAG);
+    SDValue LowerINTRINSIC_WO_CHAIN(SDValue Op, SelectionDAG &DAG,
+                                    const ARMSubtarget *Subtarget);
     SDValue LowerBlockAddress(SDValue Op, SelectionDAG &DAG);
     SDValue LowerGlobalAddressDarwin(SDValue Op, SelectionDAG &DAG);
     SDValue LowerGlobalAddressELF(SDValue Op, SelectionDAG &DAG);
@@ -319,7 +320,7 @@ namespace llvm {
     virtual SDValue
       LowerCall(SDValue Chain, SDValue Callee,
                 CallingConv::ID CallConv, bool isVarArg,
-                bool isTailCall,
+                bool &isTailCall,
                 const SmallVectorImpl<ISD::OutputArg> &Outs,
                 const SmallVectorImpl<ISD::InputArg> &Ins,
                 DebugLoc dl, SelectionDAG &DAG,
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrFormats.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrFormats.td
index 28b2821..db60458 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrFormats.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrFormats.td
@@ -56,6 +56,9 @@ def NEONGetLnFrm  : Format<25>;
 def NEONSetLnFrm  : Format<26>;
 def NEONDupFrm    : Format<27>;
 
+def MiscFrm       : Format<29>;
+def ThumbMiscFrm  : Format<30>;
+
 // Misc flags.
 
 // the instruction has a Rn register operand.
@@ -1246,75 +1249,99 @@ class AXSI5<dag oops, dag iops, InstrItinClass itin,
 }
 
 // Double precision, unary
-class ADuI<bits<8> opcod1, bits<4> opcod2, bits<4> opcod3, dag oops, dag iops,
-           InstrItinClass itin, string opc, string asm, list<dag> pattern>
+class ADuI<bits<5> opcod1, bits<2> opcod2, bits<4> opcod3, bits<2> opcod4,
+           bit opcod5, dag oops, dag iops, InstrItinClass itin, string opc,
+           string asm, list<dag> pattern>
   : VFPAI<oops, iops, VFPUnaryFrm, itin, opc, asm, pattern> {
-  let Inst{27-20} = opcod1;
-  let Inst{19-16} = opcod2;
+  let Inst{27-23} = opcod1;
+  let Inst{21-20} = opcod2;
+  let Inst{19-16} = opcod3;
   let Inst{11-8}  = 0b1011;
-  let Inst{7-4}   = opcod3;
+  let Inst{7-6}   = opcod4;
+  let Inst{4}     = opcod5;
 }
 
 // Double precision, binary
-class ADbI<bits<8> opcod, dag oops, dag iops, InstrItinClass itin,
-           string opc, string asm, list<dag> pattern>
+class ADbI<bits<5> opcod1, bits<2> opcod2, bit op6, bit op4, dag oops,
+       dag iops, InstrItinClass itin, string opc, string asm, list<dag> pattern>
   : VFPAI<oops, iops, VFPBinaryFrm, itin, opc, asm, pattern> {
-  let Inst{27-20} = opcod;
+  let Inst{27-23} = opcod1;
+  let Inst{21-20} = opcod2;
   let Inst{11-8}  = 0b1011;
+  let Inst{6} = op6;
+  let Inst{4} = op4;
 }
 
 // Single precision, unary
-class ASuI<bits<8> opcod1, bits<4> opcod2, bits<4> opcod3, dag oops, dag iops,
-           InstrItinClass itin, string opc, string asm, list<dag> pattern>
+class ASuI<bits<5> opcod1, bits<2> opcod2, bits<4> opcod3, bits<2> opcod4,
+           bit opcod5, dag oops, dag iops, InstrItinClass itin, string opc,
+           string asm, list<dag> pattern>
   : VFPAI<oops, iops, VFPUnaryFrm, itin, opc, asm, pattern> {
-  // Bits 22 (D bit) and 5 (M bit) will be changed during instruction encoding.
-  let Inst{27-20} = opcod1;
-  let Inst{19-16} = opcod2;
+  let Inst{27-23} = opcod1;
+  let Inst{21-20} = opcod2;
+  let Inst{19-16} = opcod3;
   let Inst{11-8}  = 0b1010;
-  let Inst{7-4}   = opcod3;
+  let Inst{7-6}   = opcod4;
+  let Inst{4}     = opcod5;
 }
 
 // Single precision unary, if no NEON
 // Same as ASuI except not available if NEON is enabled
-class ASuIn<bits<8> opcod1, bits<4> opcod2, bits<4> opcod3, dag oops, dag iops,
-            InstrItinClass itin, string opc, string asm, list<dag> pattern>
-  : ASuI<opcod1, opcod2, opcod3, oops, iops, itin, opc, asm, pattern> {
+class ASuIn<bits<5> opcod1, bits<2> opcod2, bits<4> opcod3, bits<2> opcod4,
+            bit opcod5, dag oops, dag iops, InstrItinClass itin, string opc,
+            string asm, list<dag> pattern>
+  : ASuI<opcod1, opcod2, opcod3, opcod4, opcod5, oops, iops, itin, opc, asm,
+         pattern> {
   list<Predicate> Predicates = [HasVFP2,DontUseNEONForFP];
 }
 
 // Single precision, binary
-class ASbI<bits<8> opcod, dag oops, dag iops, InstrItinClass itin,
-           string opc, string asm, list<dag> pattern>
+class ASbI<bits<5> opcod1, bits<2> opcod2, bit op6, bit op4, dag oops, dag iops,
+           InstrItinClass itin, string opc, string asm, list<dag> pattern>
   : VFPAI<oops, iops, VFPBinaryFrm, itin, opc, asm, pattern> {
-  // Bit 22 (D bit) can be changed during instruction encoding.
-  let Inst{27-20} = opcod;
+  let Inst{27-23} = opcod1;
+  let Inst{21-20} = opcod2;
   let Inst{11-8}  = 0b1010;
+  let Inst{6} = op6;
+  let Inst{4} = op4;
 }
 
 // Single precision binary, if no NEON
 // Same as ASbI except not available if NEON is enabled
-class ASbIn<bits<8> opcod, dag oops, dag iops, InstrItinClass itin,
-            string opc, string asm, list<dag> pattern>
-  : ASbI<opcod, oops, iops, itin, opc, asm, pattern> {
+class ASbIn<bits<5> opcod1, bits<2> opcod2, bit op6, bit op4, dag oops,
+       dag iops, InstrItinClass itin, string opc, string asm, list<dag> pattern>
+  : ASbI<opcod1, opcod2, op6, op4, oops, iops, itin, opc, asm, pattern> {
   list<Predicate> Predicates = [HasVFP2,DontUseNEONForFP];
 }
 
 // VFP conversion instructions
-class AVConv1I<bits<8> opcod1, bits<4> opcod2, bits<4> opcod3,
-               dag oops, dag iops, InstrItinClass itin,
-               string opc, string asm, list<dag> pattern>
+class AVConv1I<bits<5> opcod1, bits<2> opcod2, bits<4> opcod3, bits<4> opcod4,
+               dag oops, dag iops, InstrItinClass itin, string opc, string asm,
+               list<dag> pattern>
   : VFPAI<oops, iops, VFPConv1Frm, itin, opc, asm, pattern> {
-  let Inst{27-20} = opcod1;
-  let Inst{19-16} = opcod2;
-  let Inst{11-8}  = opcod3;
+  let Inst{27-23} = opcod1;
+  let Inst{21-20} = opcod2;
+  let Inst{19-16} = opcod3;
+  let Inst{11-8}  = opcod4;
   let Inst{6}     = 1;
+  let Inst{4}     = 0;
+}
+
+// VFP conversion between floating-point and fixed-point
+class AVConv1XI<bits<5> op1, bits<2> op2, bits<4> op3, bits<4> op4, bit op5,
+               dag oops, dag iops, InstrItinClass itin, string opc, string asm,
+               list<dag> pattern>
+  : AVConv1I<op1, op2, op3, op4, oops, iops, itin, opc, asm, pattern> {
+  // size (fixed-point number): sx == 0 ? 16 : 32
+  let Inst{7} = op5; // sx
 }
 
 // VFP conversion instructions, if no NEON
-class AVConv1In<bits<8> opcod1, bits<4> opcod2, bits<4> opcod3,
+class AVConv1In<bits<5> opcod1, bits<2> opcod2, bits<4> opcod3, bits<4> opcod4,
                 dag oops, dag iops, InstrItinClass itin,
                 string opc, string asm, list<dag> pattern>
-  : AVConv1I<opcod1, opcod2, opcod3, oops, iops, itin, opc, asm, pattern> {
+  : AVConv1I<opcod1, opcod2, opcod3, opcod4, oops, iops, itin, opc, asm,
+             pattern> {
   list<Predicate> Predicates = [HasVFP2,DontUseNEONForFP];
 }
 
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.td
index af508ee..c733215 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrInfo.td
@@ -44,7 +44,8 @@ def SDT_ARMPICAdd  : SDTypeProfile<1, 2, [SDTCisSameAs<0, 1>,
                                           SDTCisPtrTy<1>, SDTCisVT<2, i32>]>;
 
 def SDT_ARMThreadPointer : SDTypeProfile<1, 0, [SDTCisPtrTy<0>]>;
-def SDT_ARMEH_SJLJ_Setjmp : SDTypeProfile<1, 1, [SDTCisInt<0>, SDTCisPtrTy<1>]>;
+def SDT_ARMEH_SJLJ_Setjmp : SDTypeProfile<1, 2, [SDTCisInt<0>, SDTCisPtrTy<1>,
+                                                 SDTCisInt<2>]>;
 
 def SDT_ARMMEMBARRIERV7  : SDTypeProfile<0, 0, []>;
 def SDT_ARMSYNCBARRIERV7 : SDTypeProfile<0, 0, []>;
@@ -604,6 +605,102 @@ PseudoInst<(outs), (ins i32imm:$amt, pred:$p), NoItinerary,
            [(ARMcallseq_start timm:$amt)]>;
 }
 
+def NOP : AI<(outs), (ins), MiscFrm, NoItinerary, "nop", "",
+             [/* For disassembly only; pattern left blank */]>,
+          Requires<[IsARM, HasV6T2]> {
+  let Inst{27-16} = 0b001100100000;
+  let Inst{7-0} = 0b00000000;
+}
+
+def YIELD : AI<(outs), (ins), MiscFrm, NoItinerary, "yield", "",
+             [/* For disassembly only; pattern left blank */]>,
+          Requires<[IsARM, HasV6T2]> {
+  let Inst{27-16} = 0b001100100000;
+  let Inst{7-0} = 0b00000001;
+}
+
+def WFE : AI<(outs), (ins), MiscFrm, NoItinerary, "wfe", "",
+             [/* For disassembly only; pattern left blank */]>,
+          Requires<[IsARM, HasV6T2]> {
+  let Inst{27-16} = 0b001100100000;
+  let Inst{7-0} = 0b00000010;
+}
+
+def WFI : AI<(outs), (ins), MiscFrm, NoItinerary, "wfi", "",
+             [/* For disassembly only; pattern left blank */]>,
+          Requires<[IsARM, HasV6T2]> {
+  let Inst{27-16} = 0b001100100000;
+  let Inst{7-0} = 0b00000011;
+}
+
+def SEV : AI<(outs), (ins), MiscFrm, NoItinerary, "sev", "",
+             [/* For disassembly only; pattern left blank */]>,
+          Requires<[IsARM, HasV6T2]> {
+  let Inst{27-16} = 0b001100100000;
+  let Inst{7-0} = 0b00000100;
+}
+
+// The i32imm operand $val can be used by a debugger to store more information
+// about the breakpoint.
+def BKPT : AI<(outs), (ins i32imm:$val), MiscFrm, NoItinerary, "bkpt", "\t$val",
+              [/* For disassembly only; pattern left blank */]>,
+           Requires<[IsARM]> {
+  let Inst{27-20} = 0b00010010;
+  let Inst{7-4} = 0b0111;
+}
+
+// Change Processor State is a system instruction -- for disassembly only.
+// The singleton $opt operand contains the following information:
+// opt{4-0} = mode from Inst{4-0}
+// opt{5} = changemode from Inst{17}
+// opt{8-6} = AIF from Inst{8-6}
+// opt{10-9} = imod from Inst{19-18} with 0b10 as enable and 0b11 as disable
+def CPS : AXI<(outs),(ins i32imm:$opt), MiscFrm, NoItinerary, "cps${opt:cps}",
+              [/* For disassembly only; pattern left blank */]>,
+          Requires<[IsARM]> {
+  let Inst{31-28} = 0b1111;
+  let Inst{27-20} = 0b00010000;
+  let Inst{16} = 0;
+  let Inst{5} = 0;
+}
+
+def SETENDBE : AXI<(outs),(ins), MiscFrm, NoItinerary, "setend\tbe",
+                   [/* For disassembly only; pattern left blank */]>,
+               Requires<[IsARM]> {
+  let Inst{31-28} = 0b1111;
+  let Inst{27-20} = 0b00010000;
+  let Inst{16} = 1;
+  let Inst{9} = 1;
+  let Inst{7-4} = 0b0000;
+}
+
+def SETENDLE : AXI<(outs),(ins), MiscFrm, NoItinerary, "setend\tle",
+                   [/* For disassembly only; pattern left blank */]>,
+               Requires<[IsARM]> {
+  let Inst{31-28} = 0b1111;
+  let Inst{27-20} = 0b00010000;
+  let Inst{16} = 1;
+  let Inst{9} = 0;
+  let Inst{7-4} = 0b0000;
+}
+
+def DBG : AI<(outs), (ins i32imm:$opt), MiscFrm, NoItinerary, "dbg", "\t$opt",
+             [/* For disassembly only; pattern left blank */]>,
+          Requires<[IsARM, HasV7]> {
+  let Inst{27-16} = 0b001100100000;
+  let Inst{7-4} = 0b1111;
+}
+
+// A5.4 Permanently UNDEFINED instructions.
+def TRAP : AI<(outs), (ins), MiscFrm, NoItinerary, "trap", "",
+              [/* For disassembly only; pattern left blank */]>,
+           Requires<[IsARM]> {
+  let Inst{27-25} = 0b011;
+  let Inst{24-20} = 0b11111;
+  let Inst{7-5} = 0b111;
+  let Inst{4} = 0b1;
+}
+
 // Address computation and loads and stores in PIC mode.
 let isNotDuplicable = 1 in {
 def PICADD : AXI1<0b0100, (outs GPR:$dst), (ins GPR:$a, pclabel:$cp, pred:$p),
@@ -826,6 +923,20 @@ let isBranch = 1, isTerminator = 1 in {
                [/*(ARMbrcond bb:$target, imm:$cc, CCR:$ccr)*/]>;
 }
 
+// Branch and Exchange Jazelle -- for disassembly only
+def BXJ : ABI<0b0001, (outs), (ins GPR:$func), NoItinerary, "bxj", "\t$func",
+              [/* For disassembly only; pattern left blank */]> {
+  let Inst{23-20} = 0b0010;
+  //let Inst{19-8} = 0xfff;
+  let Inst{7-4} = 0b0010;
+}
+
+// Supervisor call (software interrupt) -- for disassembly only
+let isCall = 1 in {
+def SVC : ABI<0b1111, (outs), (ins i32imm:$svc), IIC_Br, "svc", "\t$svc",
+              [/* For disassembly only; pattern left blank */]>;
+}
+
 //===----------------------------------------------------------------------===//
 //  Load / store Instructions.
 //
@@ -908,6 +1019,20 @@ def LDRSB_POST: AI3ldsbpo<(outs GPR:$dst, GPR:$base_wb),
                    "ldrsb", "\t$dst, [$base], $offset", "$base = $base_wb", []>;
 }
 
+// LDRT and LDRBT are for disassembly only.
+
+def LDRT : AI2ldwpo<(outs GPR:$dst, GPR:$base_wb),
+                   (ins GPR:$base, am2offset:$offset), LdFrm, IIC_iLoadru,
+                   "ldrt", "\t$dst, [$base], $offset", "$base = $base_wb", []> {
+  let Inst{21} = 1; // overwrite
+}
+
+def LDRBT : AI2ldbpo<(outs GPR:$dst, GPR:$base_wb),
+                   (ins GPR:$base,am2offset:$offset), LdFrm, IIC_iLoadru,
+                   "ldrb", "\t$dst, [$base], $offset", "$base = $base_wb", []> {
+  let Inst{21} = 1; // overwrite
+}
+
 // Store
 def STR  : AI2stw<(outs), (ins GPR:$src, addrmode2:$addr), StFrm, IIC_iStorer,
                "str", "\t$src, $addr",
@@ -971,6 +1096,24 @@ def STRB_POST: AI2stbpo<(outs GPR:$base_wb),
                     [(set GPR:$base_wb, (post_truncsti8 GPR:$src,
                                          GPR:$base, am2offset:$offset))]>;
 
+// STRT and STRBT are for disassembly only.
+
+def STRT : AI2stwpo<(outs GPR:$base_wb),
+                    (ins GPR:$src, GPR:$base,am2offset:$offset), 
+                    StFrm, IIC_iStoreru,
+                    "strt", "\t$src, [$base], $offset", "$base = $base_wb",
+                    [/* For disassembly only; pattern left blank */]> {
+  let Inst{21} = 1; // overwrite
+}
+
+def STRBT : AI2stbpo<(outs GPR:$base_wb),
+                     (ins GPR:$src, GPR:$base,am2offset:$offset), 
+                     StFrm, IIC_iStoreru,
+                     "strbt", "\t$src, [$base], $offset", "$base = $base_wb",
+                     [/* For disassembly only; pattern left blank */]> {
+  let Inst{21} = 1; // overwrite
+}
+
 //===----------------------------------------------------------------------===//
 //  Load / store multiple Instructions.
 //
@@ -1015,7 +1158,7 @@ def MOVi16 : AI1<0b1000, (outs GPR:$dst), (ins i32imm:$src),
                  DPFrm, IIC_iMOVi,
                  "movw", "\t$dst, $src",
                  [(set GPR:$dst, imm0_65535:$src)]>,
-                 Requires<[IsARM, HasV6T2]> {
+                 Requires<[IsARM, HasV6T2]>, UnaryDP {
   let Inst{20} = 0;
   let Inst{25} = 1;
 }
@@ -1215,6 +1358,63 @@ def : ARMPat<(add    GPR:$src, so_imm_neg:$imm),
 // (mul X, 2^n+1) -> (add (X << n), X)
 // (mul X, 2^n-1) -> (rsb X, (X << n))
 
+// Saturating adds/subtracts -- for disassembly only
+
+// GPR:$dst = GPR:$a op GPR:$b
+class AQI<bits<8> op27_20, bits<4> op7_4, string opc, list<dag> pattern>
+  : AI<(outs GPR:$dst), (ins GPR:$a, GPR:$b), DPFrm, IIC_iALUr,
+       opc, "\t$dst, $a, $b", pattern> {
+  let Inst{27-20} = op27_20;
+  let Inst{7-4} = op7_4;
+}
+
+def QADD    : AQI<0b00010000, 0b0101, "qadd",
+                  [/* For disassembly only; pattern left blank */]>;
+
+def QADD16  : AQI<0b01100010, 0b0001, "qadd16",
+                  [/* For disassembly only; pattern left blank */]>;
+
+def QADD8   : AQI<0b01100010, 0b1001, "qadd8",
+                  [/* For disassembly only; pattern left blank */]>;
+
+def QASX    : AQI<0b01100010, 0b0011, "qasx",
+                  [/* For disassembly only; pattern left blank */]>;
+
+def QDADD   : AQI<0b00010100, 0b0101, "qdadd",
+                  [/* For disassembly only; pattern left blank */]>;
+
+def QDSUB   : AQI<0b00010110, 0b0101, "qdsub",
+                  [/* For disassembly only; pattern left blank */]>;
+
+def QSAX    : AQI<0b01100010, 0b0101, "qsax",
+                  [/* For disassembly only; pattern left blank */]>;
+
+def QSUB    : AQI<0b00010010, 0b0101, "qsub",
+                  [/* For disassembly only; pattern left blank */]>;
+
+def QSUB16  : AQI<0b01100010, 0b0111, "qsub16",
+                  [/* For disassembly only; pattern left blank */]>;
+
+def QSUB8   : AQI<0b01100010, 0b1111, "qsub8",
+                  [/* For disassembly only; pattern left blank */]>;
+
+def UQADD16 : AQI<0b01100110, 0b0001, "uqadd16",
+                  [/* For disassembly only; pattern left blank */]>;
+
+def UQADD8  : AQI<0b01100110, 0b1001, "uqadd8",
+                  [/* For disassembly only; pattern left blank */]>;
+
+def UQASX   : AQI<0b01100110, 0b0011, "uqasx",
+                  [/* For disassembly only; pattern left blank */]>;
+
+def UQSAX   : AQI<0b01100110, 0b0101, "uqsax",
+                  [/* For disassembly only; pattern left blank */]>;
+
+def UQSUB16 : AQI<0b01100110, 0b0111, "uqsub16",
+                  [/* For disassembly only; pattern left blank */]>;
+
+def UQSUB8  : AQI<0b01100110, 0b1111, "uqsub8",
+                  [/* For disassembly only; pattern left blank */]>;
 
 //===----------------------------------------------------------------------===//
 //  Bitwise Instructions.
@@ -1241,11 +1441,14 @@ def BFC    : I<(outs GPR:$dst), (ins GPR:$src, bf_inv_mask_imm:$imm),
 def  MVNr  : AsI1<0b1111, (outs GPR:$dst), (ins GPR:$src), DPFrm, IIC_iMOVr,
                   "mvn", "\t$dst, $src",
                   [(set GPR:$dst, (not GPR:$src))]>, UnaryDP {
+  let Inst{25} = 0;
   let Inst{11-4} = 0b00000000;
 }
 def  MVNs  : AsI1<0b1111, (outs GPR:$dst), (ins so_reg:$src), DPSoRegFrm,
                   IIC_iMOVsr, "mvn", "\t$dst, $src",
-                  [(set GPR:$dst, (not so_reg:$src))]>, UnaryDP;
+                  [(set GPR:$dst, (not so_reg:$src))]>, UnaryDP {
+  let Inst{25} = 0;
+}
 let isReMaterializable = 1, isAsCheapAsAMove = 1 in
 def  MVNi  : AsI1<0b1111, (outs GPR:$dst), (ins so_imm:$imm), DPFrm, 
                   IIC_iMOVi, "mvn", "\t$dst, $imm",
@@ -1442,7 +1645,39 @@ multiclass AI_smla<string opc, PatFrag opnode> {
 defm SMUL : AI_smul<"smul", BinOpFrag<(mul node:$LHS, node:$RHS)>>;
 defm SMLA : AI_smla<"smla", BinOpFrag<(mul node:$LHS, node:$RHS)>>;
 
-// TODO: Halfword multiple accumulate long: SMLAL<x><y>
+// Halfword multiply accumulate long: SMLAL<x><y> -- for disassembly only
+def SMLALBB : AMulxyI<0b0001010,(outs GPR:$ldst,GPR:$hdst),(ins GPR:$a,GPR:$b),
+                      IIC_iMAC64, "smlalbb", "\t$ldst, $hdst, $a, $b",
+                      [/* For disassembly only; pattern left blank */]>,
+              Requires<[IsARM, HasV5TE]> {
+  let Inst{5} = 0;
+  let Inst{6} = 0;
+}
+
+def SMLALBT : AMulxyI<0b0001010,(outs GPR:$ldst,GPR:$hdst),(ins GPR:$a,GPR:$b),
+                      IIC_iMAC64, "smlalbt", "\t$ldst, $hdst, $a, $b",
+                      [/* For disassembly only; pattern left blank */]>,
+              Requires<[IsARM, HasV5TE]> {
+  let Inst{5} = 0;
+  let Inst{6} = 1;
+}
+
+def SMLALTB : AMulxyI<0b0001010,(outs GPR:$ldst,GPR:$hdst),(ins GPR:$a,GPR:$b),
+                      IIC_iMAC64, "smlaltb", "\t$ldst, $hdst, $a, $b",
+                      [/* For disassembly only; pattern left blank */]>,
+              Requires<[IsARM, HasV5TE]> {
+  let Inst{5} = 1;
+  let Inst{6} = 0;
+}
+
+def SMLALTT : AMulxyI<0b0001010,(outs GPR:$ldst,GPR:$hdst),(ins GPR:$a,GPR:$b),
+                      IIC_iMAC64, "smlaltt", "\t$ldst, $hdst, $a, $b",
+                      [/* For disassembly only; pattern left blank */]>,
+              Requires<[IsARM, HasV5TE]> {
+  let Inst{5} = 1;
+  let Inst{6} = 1;
+}
+
+// TODO: Dual halfword multiply: SMUAD, SMUSD, SMLAD, SMLSD, SMLALD, SMLSLD
 
 //===----------------------------------------------------------------------===//
@@ -1773,6 +2008,27 @@ def STREXD : AIstrex<0b01, (outs GPR:$success),
                     []>;
 }
 
+// SWP/SWPB are deprecated in V6/V7 and for disassembly only.
+let mayLoad = 1 in {
+def SWP : AI<(outs GPR:$dst), (ins GPR:$src, GPR:$ptr), LdStExFrm, NoItinerary,
+             "swp", "\t$dst, $src, [$ptr]",
+             [/* For disassembly only; pattern left blank */]> {
+  let Inst{27-23} = 0b00010;
+  let Inst{22} = 0; // B = 0
+  let Inst{21-20} = 0b00;
+  let Inst{7-4} = 0b1001;
+}
+
+def SWPB : AI<(outs GPR:$dst), (ins GPR:$src, GPR:$ptr), LdStExFrm, NoItinerary,
+             "swpb", "\t$dst, $src, [$ptr]",
+             [/* For disassembly only; pattern left blank */]> {
+  let Inst{27-23} = 0b00010;
+  let Inst{22} = 1; // B = 1
+  let Inst{21-20} = 0b00;
+  let Inst{7-4} = 0b1001;
+}
+}
+
 //===----------------------------------------------------------------------===//
 // TLS Instructions
 //
@@ -1797,21 +2053,22 @@ let isCall = 1,
 //   except for our own input by listing the relevant registers in Defs. By
 //   doing so, we also cause the prologue/epilogue code to actively preserve
 //   all of the callee-saved registers, which is exactly what we want.
-let Defs = 
+//   A constant value is passed in $val, and we use the location as a scratch.
+let Defs =
   [ R0,  R1,  R2,  R3,  R4,  R5,  R6,  R7,  R8,  R9,  R10, R11, R12, LR,  D0,
     D1,  D2,  D3,  D4,  D5,  D6,  D7,  D8,  D9,  D10, D11, D12, D13, D14, D15,
     D16, D17, D18, D19, D20, D21, D22, D23, D24, D25, D26, D27, D28, D29, D30,
     D31 ] in {
-  def Int_eh_sjlj_setjmp : XI<(outs), (ins GPR:$src),
+  def Int_eh_sjlj_setjmp : XI<(outs), (ins GPR:$src, GPR:$val),
                                AddrModeNone, SizeSpecial, IndexModeNone,
                                Pseudo, NoItinerary,
                                "str\tsp, [$src, #+8] @ eh_setjmp begin\n\t"
-                               "add\tr12, pc, #8\n\t"
-                               "str\tr12, [$src, #+4]\n\t"
+                               "add\t$val, pc, #8\n\t"
+                               "str\t$val, [$src, #+4]\n\t"
                                "mov\tr0, #0\n\t"
                                "add\tpc, pc, #0\n\t"
                                "mov\tr0, #1 @ eh_setjmp end", "",
-                               [(set R0, (ARMeh_sjlj_setjmp GPR:$src))]>;
+                         [(set R0, (ARMeh_sjlj_setjmp GPR:$src, GPR:$val))]>;
 }
 
 //===----------------------------------------------------------------------===//
@@ -1954,3 +2211,116 @@ include "ARMInstrVFP.td"
 //
 
 include "ARMInstrNEON.td"
+
+//===----------------------------------------------------------------------===//
+// Coprocessor Instructions.  For disassembly only.
+//
+
+def CDP : ABI<0b1110, (outs), (ins nohash_imm:$cop, i32imm:$opc1,
+            nohash_imm:$CRd, nohash_imm:$CRn, nohash_imm:$CRm, i32imm:$opc2),
+            NoItinerary, "cdp", "\tp$cop, $opc1, cr$CRd, cr$CRn, cr$CRm, $opc2",
+              [/* For disassembly only; pattern left blank */]> {
+  let Inst{4} = 0;
+}
+
+def CDP2 : ABXI<0b1110, (outs), (ins nohash_imm:$cop, i32imm:$opc1,
+               nohash_imm:$CRd, nohash_imm:$CRn, nohash_imm:$CRm, i32imm:$opc2),
+               NoItinerary, "cdp2\tp$cop, $opc1, cr$CRd, cr$CRn, cr$CRm, $opc2",
+               [/* For disassembly only; pattern left blank */]> {
+  let Inst{31-28} = 0b1111;
+  let Inst{4} = 0;
+}
+
+def MCR : ABI<0b1110, (outs), (ins nohash_imm:$cop, i32imm:$opc1,
+              GPR:$Rt, nohash_imm:$CRn, nohash_imm:$CRm, i32imm:$opc2),
+              NoItinerary, "mcr", "\tp$cop, $opc1, $Rt, cr$CRn, cr$CRm, $opc2",
+              [/* For disassembly only; pattern left blank */]> {
+  let Inst{20} = 0;
+  let Inst{4} = 1;
+}
+
+def MCR2 : ABXI<0b1110, (outs), (ins nohash_imm:$cop, i32imm:$opc1,
+                GPR:$Rt, nohash_imm:$CRn, nohash_imm:$CRm, i32imm:$opc2),
+                NoItinerary, "mcr2\tp$cop, $opc1, $Rt, cr$CRn, cr$CRm, $opc2",
+                [/* For disassembly only; pattern left blank */]> {
+  let Inst{31-28} = 0b1111;
+  let Inst{20} = 0;
+  let Inst{4} = 1;
+}
+
+def MRC : ABI<0b1110, (outs), (ins nohash_imm:$cop, i32imm:$opc1,
+              GPR:$Rt, nohash_imm:$CRn, nohash_imm:$CRm, i32imm:$opc2),
+              NoItinerary, "mrc", "\tp$cop, $opc1, $Rt, cr$CRn, cr$CRm, $opc2",
+              [/* For disassembly only; pattern left blank */]> {
+  let Inst{20} = 1;
+  let Inst{4} = 1;
+}
+
+def MRC2 : ABXI<0b1110, (outs), (ins nohash_imm:$cop, i32imm:$opc1,
+                GPR:$Rt, nohash_imm:$CRn, nohash_imm:$CRm, i32imm:$opc2),
+                NoItinerary, "mrc2\tp$cop, $opc1, $Rt, cr$CRn, cr$CRm, $opc2",
+                [/* For disassembly only; pattern left blank */]> {
+  let Inst{31-28} = 0b1111;
+  let Inst{20} = 1;
+  let Inst{4} = 1;
+}
+
+def MCRR : ABI<0b1100, (outs), (ins nohash_imm:$cop, i32imm:$opc,
+               GPR:$Rt, GPR:$Rt2, nohash_imm:$CRm),
+               NoItinerary, "mcrr", "\tp$cop, $opc, $Rt, $Rt2, cr$CRm",
+               [/* For disassembly only; pattern left blank */]> {
+  let Inst{23-20} = 0b0100;
+}
+
+def MCRR2 : ABXI<0b1100, (outs), (ins nohash_imm:$cop, i32imm:$opc,
+                 GPR:$Rt, GPR:$Rt2, nohash_imm:$CRm),
+                 NoItinerary, "mcrr2\tp$cop, $opc, $Rt, $Rt2, cr$CRm",
+                 [/* For disassembly only; pattern left blank */]> {
+  let Inst{31-28} = 0b1111;
+  let Inst{23-20} = 0b0100;
+}
+
+def MRRC : ABI<0b1100, (outs), (ins nohash_imm:$cop, i32imm:$opc,
+               GPR:$Rt, GPR:$Rt2, nohash_imm:$CRm),
+               NoItinerary, "mrrc", "\tp$cop, $opc, $Rt, $Rt2, cr$CRm",
+               [/* For disassembly only; pattern left blank */]> {
+  let Inst{23-20} = 0b0101;
+}
+
+def MRRC2 : ABXI<0b1100, (outs), (ins nohash_imm:$cop, i32imm:$opc,
+                 GPR:$Rt, GPR:$Rt2, nohash_imm:$CRm),
+                 NoItinerary, "mrrc2\tp$cop, $opc, $Rt, $Rt2, cr$CRm",
+                 [/* For disassembly only; pattern left blank */]> {
+  let Inst{31-28} = 0b1111;
+  let Inst{23-20} = 0b0101;
+}
+
+//===----------------------------------------------------------------------===//
+// Move between special register and ARM core register -- for disassembly only
+//
+
+def MRS : ABI<0b0001,(outs GPR:$dst),(ins), NoItinerary, "mrs", "\t$dst, cpsr",
+              [/* For disassembly only; pattern left blank */]> {
+  let Inst{23-20} = 0b0000;
+  let Inst{7-4} = 0b0000;
+}
+
+def MRSsys : ABI<0b0001,(outs GPR:$dst),(ins), NoItinerary,"mrs","\t$dst, spsr",
+              [/* For disassembly only; pattern left blank */]> {
+  let Inst{23-20} = 0b0100;
+  let Inst{7-4} = 0b0000;
+}
+
+// FIXME: mask is ignored for the time being.
+def MSR : ABI<0b0001,(outs),(ins GPR:$src), NoItinerary, "msr", "\tcpsr, $src",
+              [/* For disassembly only; pattern left blank */]> {
+  let Inst{23-20} = 0b0010;
+  let Inst{7-4} = 0b0000;
+}
+
+// FIXME: mask is ignored for the time being.
+def MSRsys : ABI<0b0001,(outs),(ins GPR:$src),NoItinerary,"msr","\tspsr, $src",
+              [/* For disassembly only; pattern left blank */]> {
+  let Inst{23-20} = 0b0110;
+  let Inst{7-4} = 0b0000;
+}
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrNEON.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrNEON.td
index cd063bf..e2be7ba 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrNEON.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrNEON.td
@@ -2192,9 +2192,27 @@ def  VBSLq    : N3VX<1, 0, 0b01, 0b0001, 1, 1, (outs QPR:$dst),
 
 //   VBIF     : Vector Bitwise Insert if False
 //              like VBSL but with: "vbif $dst, $src3, $src1", "$src2 = $dst",
+def  VBIFd    : N3VX<1, 0, 0b11, 0b0001, 0, 1,
+                     (outs DPR:$dst), (ins DPR:$src1, DPR:$src2, DPR:$src3),
+                     IIC_VBINiD, "vbif", "$dst, $src2, $src3", "$src1 = $dst",
+                     [/* For disassembly only; pattern left blank */]>;
+def  VBIFq    : N3VX<1, 0, 0b11, 0b0001, 1, 1,
+                     (outs QPR:$dst), (ins QPR:$src1, QPR:$src2, QPR:$src3),
+                     IIC_VBINiQ, "vbif", "$dst, $src2, $src3", "$src1 = $dst",
+                     [/* For disassembly only; pattern left blank */]>;
+
 //   VBIT     : Vector Bitwise Insert if True
 //              like VBSL but with: "vbit $dst, $src2, $src1", "$src3 = $dst",
-// These are not yet implemented.  The TwoAddress pass will not go looking
+def  VBITd    : N3VX<1, 0, 0b10, 0b0001, 0, 1,
+                     (outs DPR:$dst), (ins DPR:$src1, DPR:$src2, DPR:$src3),
+                     IIC_VBINiD, "vbit", "$dst, $src2, $src3", "$src1 = $dst",
+                     [/* For disassembly only; pattern left blank */]>;
+def  VBITq    : N3VX<1, 0, 0b10, 0b0001, 1, 1,
+                     (outs QPR:$dst), (ins QPR:$src1, QPR:$src2, QPR:$src3),
+                     IIC_VBINiQ, "vbit", "$dst, $src2, $src3", "$src1 = $dst",
+                     [/* For disassembly only; pattern left blank */]>;
+
+// VBIT/VBIF are not yet implemented.  The TwoAddress pass will not go looking
 // for equivalent operations with different register constraints; it just
 // inserts copies.
 
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb.td
index 746caff..64142ad 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb.td
@@ -132,6 +132,14 @@ PseudoInst<(outs), (ins i32imm:$amt), NoItinerary,
            [(ARMcallseq_start imm:$amt)]>, Requires<[IsThumb1Only]>;
 }
 
+// The i32imm operand $val can be used by a debugger to store more information
+// about the breakpoint.
+def tBKPT : T1I<(outs), (ins i32imm:$val), NoItinerary, "bkpt\t$val",
+                [/* For disassembly only; pattern left blank */]>,
+            T1Encoding<0b101111> {
+  let Inst{9-8} = 0b10;
+}
+
 // For both thumb1 and thumb2.
 let isNotDuplicable = 1 in
 def tPICADD : TIt<(outs GPR:$dst), (ins GPR:$lhs, pclabel:$cp), IIC_iALUr,
@@ -775,7 +783,7 @@ def tMOVCCr : T1pIt<(outs GPR:$dst), (ins GPR:$lhs, GPR:$rhs), IIC_iCMOVr,
                     "mov", "\t$dst, $rhs", []>,
               T1Special<{1,0,?,?}>;
 
-def tMOVCCi : T1pIt<(outs GPR:$dst), (ins GPR:$lhs, i32imm:$rhs), IIC_iCMOVi,
+def tMOVCCi : T1pIt<(outs tGPR:$dst), (ins tGPR:$lhs, i32imm:$rhs), IIC_iCMOVi,
                     "mov", "\t$dst, $rhs", []>,
               T1General<{1,0,0,?,?}>;
 
@@ -813,23 +821,20 @@ let isCall = 1,
 //   except for our own input by listing the relevant registers in Defs. By
 //   doing so, we also cause the prologue/epilogue code to actively preserve
 //   all of the callee-saved registers, which is exactly what we want.
+//   The current SP is passed in $val, and we reuse the reg as a scratch.
 let Defs =
   [ R0,  R1,  R2,  R3,  R4,  R5,  R6,  R7, R12 ] in {
-  def tInt_eh_sjlj_setjmp : ThumbXI<(outs), (ins GPR:$src),
+  def tInt_eh_sjlj_setjmp : ThumbXI<(outs),(ins tGPR:$src, tGPR:$val),
                               AddrModeNone, SizeSpecial, NoItinerary,
-                              "mov\tr12, r1\t@ begin eh.setjmp\n"
-                              "\tmov\tr1, sp\n"
-                              "\tstr\tr1, [$src, #8]\n"
-                              "\tadr\tr1, 0f\n"
-                              "\tadds\tr1, #1\n"
-                              "\tstr\tr1, [$src, #4]\n"
-                              "\tmov\tr1, r12\n"
+                              "str\t$val, [$src, #8]\t@ begin eh.setjmp\n"
+                              "\tmov\t$val, pc\n"
+                              "\tadds\t$val, #9\n"
+                              "\tstr\t$val, [$src, #4]\n"
                               "\tmovs\tr0, #0\n"
                               "\tb\t1f\n"
-                              ".align 2\n"
-                              "0:\tmovs\tr0, #1\t@ end eh.setjmp\n"
+                              "\tmovs\tr0, #1\t@ end eh.setjmp\n"
                               "1:", "",
-                              [(set R0, (ARMeh_sjlj_setjmp GPR:$src))]>;
+                   [(set R0, (ARMeh_sjlj_setjmp tGPR:$src, tGPR:$val))]>;
 }
 //===----------------------------------------------------------------------===//
 // Non-Instruction Patterns
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb2.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb2.td
index c7591d2..55c7aa2 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb2.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrThumb2.td
@@ -1232,7 +1232,16 @@ def t2UBFX : T2I<(outs GPR:$dst), (ins GPR:$src, imm0_31:$lsb, imm0_31:$width),
   let Inst{15} = 0;
 }
 
-// FIXME: A8.6.18  BFI - Bitfield insert (Encoding T1)
+// A8.6.18  BFI - Bitfield insert (Encoding T1)
+// Added for disassembler with the pattern field purposely left blank.
+// FIXME: Utilize this instruction in codegen.
+def t2BFI : T2I<(outs GPR:$dst), (ins GPR:$src, imm0_31:$lsb, imm0_31:$width),
+                IIC_iALUi, "bfi", "\t$dst, $src, $lsb, $width", []> {
+  let Inst{31-27} = 0b11110;
+  let Inst{25} = 1;
+  let Inst{24-20} = 0b10110;
+  let Inst{15} = 0;
+}
 
 defm t2ORN  : T2I_bin_irs<0b0011, "orn", BinOpFrag<(or  node:$LHS,
                           (not node:$RHS))>>;
@@ -1808,22 +1817,23 @@ let isCall = 1,
 //   except for our own input by listing the relevant registers in Defs. By
 //   doing so, we also cause the prologue/epilogue code to actively preserve
 //   all of the callee-saved registers, which is exactly what we want.
-let Defs = 
+//   The current SP is passed in $val, and we reuse the reg as a scratch.
+let Defs =
   [ R0,  R1,  R2,  R3,  R4,  R5,  R6,  R7,  R8,  R9,  R10, R11, R12, LR,  D0,
     D1,  D2,  D3,  D4,  D5,  D6,  D7,  D8,  D9,  D10, D11, D12, D13, D14, D15,
     D16, D17, D18, D19, D20, D21, D22, D23, D24, D25, D26, D27, D28, D29, D30,
     D31 ] in {
-  def t2Int_eh_sjlj_setjmp : Thumb2XI<(outs), (ins GPR:$src),
+  def t2Int_eh_sjlj_setjmp : Thumb2XI<(outs), (ins GPR:$src, tGPR:$val),
                                AddrModeNone, SizeSpecial, NoItinerary,
-                               "str.w\tsp, [$src, #+8] @ eh_setjmp begin\n"
-                               "\tadr\tr12, 0f\n"
-                               "\torr.w\tr12, r12, #1\n"
-                               "\tstr.w\tr12, [$src, #+4]\n"
+                               "str\t$val, [$src, #8]\t@ begin eh.setjmp\n"
+                               "\tmov\t$val, pc\n"
+                               "\tadds\t$val, #9\n"
+                               "\tstr\t$val, [$src, #4]\n"
                                "\tmovs\tr0, #0\n"
                                "\tb\t1f\n"
-                               "0:\tmovs\tr0, #1 @ eh_setjmp end\n"
+                               "\tmovs\tr0, #1\t@ end eh.setjmp\n"
                                "1:", "",
-                               [(set R0, (ARMeh_sjlj_setjmp GPR:$src))]>;
+                          [(set R0, (ARMeh_sjlj_setjmp GPR:$src, tGPR:$val))]>;
 }
 
 
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrVFP.td b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrVFP.td
index 5bfe89d..e516593 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMInstrVFP.td
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMInstrVFP.td
@@ -114,52 +114,56 @@ def VSTMS : AXSI5<(outs), (ins addrmode5:$addr, pred:$p, reglist:$wb,
 // FP Binary Operations.
 //
 
-def VADDD  : ADbI<0b11100011, (outs DPR:$dst), (ins DPR:$a, DPR:$b),
+def VADDD  : ADbI<0b11100, 0b11, 0, 0, (outs DPR:$dst), (ins DPR:$a, DPR:$b),
                  IIC_fpALU64, "vadd", ".f64\t$dst, $a, $b",
                  [(set DPR:$dst, (fadd DPR:$a, DPR:$b))]>;
 
-def VADDS  : ASbIn<0b11100011, (outs SPR:$dst), (ins SPR:$a, SPR:$b),
+def VADDS  : ASbIn<0b11100, 0b11, 0, 0, (outs SPR:$dst), (ins SPR:$a, SPR:$b),
                   IIC_fpALU32, "vadd", ".f32\t$dst, $a, $b",
                   [(set SPR:$dst, (fadd SPR:$a, SPR:$b))]>;
 
 // These are encoded as unary instructions.
 let Defs = [FPSCR] in {
-def VCMPED : ADuI<0b11101011, 0b0100, 0b1100, (outs), (ins DPR:$a, DPR:$b),
+def VCMPED : ADuI<0b11101, 0b11, 0b0100, 0b11, 0, (outs), (ins DPR:$a, DPR:$b),
                  IIC_fpCMP64, "vcmpe", ".f64\t$a, $b",
                  [(arm_cmpfp DPR:$a, DPR:$b)]>;
 
-def VCMPES : ASuI<0b11101011, 0b0100, 0b1100, (outs), (ins SPR:$a, SPR:$b),
+def VCMPD  : ADuI<0b11101, 0b11, 0b0100, 0b01, 0, (outs), (ins DPR:$a, DPR:$b),
+                 IIC_fpCMP64, "vcmp", ".f64\t$a, $b",
+                 [/* For disassembly only; pattern left blank */]>;
+
+def VCMPES : ASuI<0b11101, 0b11, 0b0100, 0b11, 0, (outs), (ins SPR:$a, SPR:$b),
                  IIC_fpCMP32, "vcmpe", ".f32\t$a, $b",
                  [(arm_cmpfp SPR:$a, SPR:$b)]>;
+
+def VCMPS  : ASuI<0b11101, 0b11, 0b0100, 0b01, 0, (outs), (ins SPR:$a, SPR:$b),
+                 IIC_fpCMP32, "vcmp", ".f32\t$a, $b",
+                 [/* For disassembly only; pattern left blank */]>;
 }
 
-def VDIVD  : ADbI<0b11101000, (outs DPR:$dst), (ins DPR:$a, DPR:$b),
+def VDIVD  : ADbI<0b11101, 0b00, 0, 0, (outs DPR:$dst), (ins DPR:$a, DPR:$b),
                  IIC_fpDIV64, "vdiv", ".f64\t$dst, $a, $b",
                  [(set DPR:$dst, (fdiv DPR:$a, DPR:$b))]>;
 
-def VDIVS  : ASbI<0b11101000, (outs SPR:$dst), (ins SPR:$a, SPR:$b),
+def VDIVS  : ASbI<0b11101, 0b00, 0, 0, (outs SPR:$dst), (ins SPR:$a, SPR:$b),
                  IIC_fpDIV32, "vdiv", ".f32\t$dst, $a, $b",
                  [(set SPR:$dst, (fdiv SPR:$a, SPR:$b))]>;
 
-def VMULD  : ADbI<0b11100010, (outs DPR:$dst), (ins DPR:$a, DPR:$b),
+def VMULD  : ADbI<0b11100, 0b10, 0, 0, (outs DPR:$dst), (ins DPR:$a, DPR:$b),
                  IIC_fpMUL64, "vmul", ".f64\t$dst, $a, $b",
                  [(set DPR:$dst, (fmul DPR:$a, DPR:$b))]>;
 
-def VMULS  : ASbIn<0b11100010, (outs SPR:$dst), (ins SPR:$a, SPR:$b),
+def VMULS  : ASbIn<0b11100, 0b10, 0, 0, (outs SPR:$dst), (ins SPR:$a, SPR:$b),
                   IIC_fpMUL32, "vmul", ".f32\t$dst, $a, $b",
                   [(set SPR:$dst, (fmul SPR:$a, SPR:$b))]>;
 
-def VNMULD  : ADbI<0b11100010, (outs DPR:$dst), (ins DPR:$a, DPR:$b),
+def VNMULD  : ADbI<0b11100, 0b10, 1, 0, (outs DPR:$dst), (ins DPR:$a, DPR:$b),
                   IIC_fpMUL64, "vnmul", ".f64\t$dst, $a, $b",
-                  [(set DPR:$dst, (fneg (fmul DPR:$a, DPR:$b)))]> {
-  let Inst{6} = 1;
-}
+                  [(set DPR:$dst, (fneg (fmul DPR:$a, DPR:$b)))]>;
 
-def VNMULS  : ASbI<0b11100010, (outs SPR:$dst), (ins SPR:$a, SPR:$b),
+def VNMULS  : ASbI<0b11100, 0b10, 1, 0, (outs SPR:$dst), (ins SPR:$a, SPR:$b),
                   IIC_fpMUL32, "vnmul", ".f32\t$dst, $a, $b",
-                  [(set SPR:$dst, (fneg (fmul SPR:$a, SPR:$b)))]> {
-  let Inst{6} = 1;
-}
+                  [(set SPR:$dst, (fneg (fmul SPR:$a, SPR:$b)))]>;
 
 // Match reassociated forms only if not sign dependent rounding.
 def : Pat<(fmul (fneg DPR:$a), DPR:$b),
@@ -168,41 +172,45 @@ def : Pat<(fmul (fneg SPR:$a), SPR:$b),
           (VNMULS SPR:$a, SPR:$b)>, Requires<[NoHonorSignDependentRounding]>;
 
 
-def VSUBD  : ADbI<0b11100011, (outs DPR:$dst), (ins DPR:$a, DPR:$b),
+def VSUBD  : ADbI<0b11100, 0b11, 1, 0, (outs DPR:$dst), (ins DPR:$a, DPR:$b),
                  IIC_fpALU64, "vsub", ".f64\t$dst, $a, $b",
-                 [(set DPR:$dst, (fsub DPR:$a, DPR:$b))]> {
-  let Inst{6} = 1;
-}
+                 [(set DPR:$dst, (fsub DPR:$a, DPR:$b))]>;
 
-def VSUBS  : ASbIn<0b11100011, (outs SPR:$dst), (ins SPR:$a, SPR:$b),
+def VSUBS  : ASbIn<0b11100, 0b11, 1, 0, (outs SPR:$dst), (ins SPR:$a, SPR:$b),
                   IIC_fpALU32, "vsub", ".f32\t$dst, $a, $b",
-                  [(set SPR:$dst, (fsub SPR:$a, SPR:$b))]> {
-  let Inst{6} = 1;
-}
+                  [(set SPR:$dst, (fsub SPR:$a, SPR:$b))]>;
 
 //===----------------------------------------------------------------------===//
 // FP Unary Operations.
 //
 
-def VABSD  : ADuI<0b11101011, 0b0000, 0b1100, (outs DPR:$dst), (ins DPR:$a),
+def VABSD  : ADuI<0b11101, 0b11, 0b0000, 0b11, 0, (outs DPR:$dst), (ins DPR:$a),
                  IIC_fpUNA64, "vabs", ".f64\t$dst, $a",
                  [(set DPR:$dst, (fabs DPR:$a))]>;
 
-def VABSS  : ASuIn<0b11101011, 0b0000, 0b1100, (outs SPR:$dst), (ins SPR:$a),
+def VABSS  : ASuIn<0b11101, 0b11, 0b0000, 0b11, 0,(outs SPR:$dst), (ins SPR:$a),
                   IIC_fpUNA32, "vabs", ".f32\t$dst, $a",
                   [(set SPR:$dst, (fabs SPR:$a))]>;
 
 let Defs = [FPSCR] in {
-def VCMPEZD : ADuI<0b11101011, 0b0101, 0b1100, (outs), (ins DPR:$a),
+def VCMPEZD : ADuI<0b11101, 0b11, 0b0101, 0b11, 0, (outs), (ins DPR:$a),
                   IIC_fpCMP64, "vcmpe", ".f64\t$a, #0",
                   [(arm_cmpfp0 DPR:$a)]>;
 
-def VCMPEZS : ASuI<0b11101011, 0b0101, 0b1100, (outs), (ins SPR:$a),
+def VCMPZD  : ADuI<0b11101, 0b11, 0b0101, 0b01, 0, (outs), (ins DPR:$a),
+                  IIC_fpCMP64, "vcmp", ".f64\t$a, #0",
+                  [/* For disassembly only; pattern left blank */]>;
+
+def VCMPEZS : ASuI<0b11101, 0b11, 0b0101, 0b11, 0, (outs), (ins SPR:$a),
                   IIC_fpCMP32, "vcmpe", ".f32\t$a, #0",
                   [(arm_cmpfp0 SPR:$a)]>;
+
+def VCMPZS  : ASuI<0b11101, 0b11, 0b0101, 0b01, 0, (outs), (ins SPR:$a),
+                  IIC_fpCMP32, "vcmp", ".f32\t$a, #0",
+                  [/* For disassembly only; pattern left blank */]>;
 }
 
-def VCVTDS : ASuI<0b11101011, 0b0111, 0b1100, (outs DPR:$dst), (ins SPR:$a),
+def VCVTDS : ASuI<0b11101, 0b11, 0b0111, 0b11, 0, (outs DPR:$dst), (ins SPR:$a),
                  IIC_fpCVTDS, "vcvt", ".f64.f32\t$dst, $a",
                  [(set DPR:$dst, (fextend SPR:$a))]>;
 
@@ -213,30 +221,49 @@ def VCVTSD : VFPAI<(outs SPR:$dst), (ins DPR:$a), VFPUnaryFrm,
   let Inst{27-23} = 0b11101;
   let Inst{21-16} = 0b110111;
   let Inst{11-8}  = 0b1011;
-  let Inst{7-4}   = 0b1100;
+  let Inst{7-6}   = 0b11;
+  let Inst{4}     = 0;
 }
 
+// Between half-precision and single-precision.  For disassembly only.
+
+def VCVTBSH : ASuI<0b11101, 0b11, 0b0010, 0b01, 0, (outs SPR:$dst), (ins SPR:$a),
+                 /* FIXME */ IIC_fpCVTDS, "vcvtb", ".f32.f16\t$dst, $a",
+                 [/* For disassembly only; pattern left blank */]>;
+
+def VCVTBHS : ASuI<0b11101, 0b11, 0b0011, 0b01, 0, (outs SPR:$dst), (ins SPR:$a),
+                 /* FIXME */ IIC_fpCVTDS, "vcvtb", ".f16.f32\t$dst, $a",
+                 [/* For disassembly only; pattern left blank */]>;
+
+def VCVTTSH : ASuI<0b11101, 0b11, 0b0010, 0b11, 0, (outs SPR:$dst), (ins SPR:$a),
+                 /* FIXME */ IIC_fpCVTDS, "vcvtt", ".f32.f16\t$dst, $a",
+                 [/* For disassembly only; pattern left blank */]>;
+
+def VCVTTHS : ASuI<0b11101, 0b11, 0b0011, 0b11, 0, (outs SPR:$dst), (ins SPR:$a),
+                 /* FIXME */ IIC_fpCVTDS, "vcvtt", ".f16.f32\t$dst, $a",
+                 [/* For disassembly only; pattern left blank */]>;
+
 let neverHasSideEffects = 1 in {
-def VMOVD: ADuI<0b11101011, 0b0000, 0b0100, (outs DPR:$dst), (ins DPR:$a),
+def VMOVD: ADuI<0b11101, 0b11, 0b0000, 0b01, 0, (outs DPR:$dst), (ins DPR:$a),
                  IIC_fpUNA64, "vmov", ".f64\t$dst, $a", []>;
 
-def VMOVS: ASuI<0b11101011, 0b0000, 0b0100, (outs SPR:$dst), (ins SPR:$a),
+def VMOVS: ASuI<0b11101, 0b11, 0b0000, 0b01, 0, (outs SPR:$dst), (ins SPR:$a),
                  IIC_fpUNA32, "vmov", ".f32\t$dst, $a", []>;
 } // neverHasSideEffects
 
-def VNEGD  : ADuI<0b11101011, 0b0001, 0b0100, (outs DPR:$dst), (ins DPR:$a),
+def VNEGD  : ADuI<0b11101, 0b11, 0b0001, 0b01, 0, (outs DPR:$dst), (ins DPR:$a),
                  IIC_fpUNA64, "vneg", ".f64\t$dst, $a",
                  [(set DPR:$dst, (fneg DPR:$a))]>;
 
-def VNEGS  : ASuIn<0b11101011, 0b0001, 0b0100, (outs SPR:$dst), (ins SPR:$a),
+def VNEGS  : ASuIn<0b11101, 0b11, 0b0001, 0b01, 0,(outs SPR:$dst), (ins SPR:$a),
                   IIC_fpUNA32, "vneg", ".f32\t$dst, $a",
                   [(set SPR:$dst, (fneg SPR:$a))]>;
 
-def VSQRTD  : ADuI<0b11101011, 0b0001, 0b1100, (outs DPR:$dst), (ins DPR:$a),
+def VSQRTD : ADuI<0b11101, 0b11, 0b0001, 0b11, 0, (outs DPR:$dst), (ins DPR:$a),
                  IIC_fpSQRT64, "vsqrt", ".f64\t$dst, $a",
                  [(set DPR:$dst, (fsqrt DPR:$a))]>;
 
-def VSQRTS  : ASuI<0b11101011, 0b0001, 0b1100, (outs SPR:$dst), (ins SPR:$a),
+def VSQRTS : ASuI<0b11101, 0b11, 0b0001, 0b11, 0, (outs SPR:$dst), (ins SPR:$a),
                  IIC_fpSQRT32, "vsqrt", ".f32\t$dst, $a",
                  [(set SPR:$dst, (fsqrt SPR:$a))]>;
 
@@ -255,7 +282,16 @@ def VMOVSR : AVConv4I<0b11100000, 0b1010, (outs SPR:$dst), (ins GPR:$src),
 def VMOVRRD  : AVConv3I<0b11000101, 0b1011,
                       (outs GPR:$wb, GPR:$dst2), (ins DPR:$src),
                  IIC_VMOVDI, "vmov", "\t$wb, $dst2, $src",
-                 [/* FIXME: Can't write pattern for multiple result instr*/]>;
+                 [/* FIXME: Can't write pattern for multiple result instr*/]> {
+  let Inst{7-6} = 0b00;
+}
+
+def VMOVRRS  : AVConv3I<0b11000101, 0b1010,
+                      (outs GPR:$wb, GPR:$dst2), (ins SPR:$src1, SPR:$src2),
+                 IIC_VMOVDI, "vmov", "\t$wb, $dst2, $src1, $src2",
+                 [/* For disassembly only; pattern left blank */]> {
+  let Inst{7-6} = 0b00;
+}
 
 // FMDHR: GPR -> SPR
 // FMDLR: GPR -> SPR
@@ -263,7 +299,16 @@ def VMOVRRD  : AVConv3I<0b11000101, 0b1011,
 def VMOVDRR : AVConv5I<0b11000100, 0b1011,
                      (outs DPR:$dst), (ins GPR:$src1, GPR:$src2),
                 IIC_VMOVID, "vmov", "\t$dst, $src1, $src2",
-                [(set DPR:$dst, (arm_fmdrr GPR:$src1, GPR:$src2))]>;
+                [(set DPR:$dst, (arm_fmdrr GPR:$src1, GPR:$src2))]> {
+  let Inst{7-6} = 0b00;
+}
+
+def VMOVSRR : AVConv5I<0b11000100, 0b1010,
+                     (outs SPR:$dst1, SPR:$dst2), (ins GPR:$src1, GPR:$src2),
+                IIC_VMOVID, "vmov", "\t$dst1, $dst2, $src1, $src2",
+                [/* For disassembly only; pattern left blank */]> {
+  let Inst{7-6} = 0b00;
+}
 
 // FMRDH: SPR -> GPR
 // FMRDL: SPR -> GPR
@@ -277,137 +322,271 @@ def VMOVDRR : AVConv5I<0b11000100, 0b1011,
 
 // Int to FP:
 
-def VSITOD : AVConv1I<0b11101011, 0b1000, 0b1011, (outs DPR:$dst), (ins SPR:$a),
+def VSITOD : AVConv1I<0b11101, 0b11, 0b1000, 0b1011,
+                 (outs DPR:$dst), (ins SPR:$a),
                  IIC_fpCVTID, "vcvt", ".f64.s32\t$dst, $a",
                  [(set DPR:$dst, (arm_sitof SPR:$a))]> {
-  let Inst{7} = 1;
+  let Inst{7} = 1; // s32
 }
 
-def VSITOS : AVConv1In<0b11101011, 0b1000, 0b1010, (outs SPR:$dst),(ins SPR:$a),
+def VSITOS : AVConv1In<0b11101, 0b11, 0b1000, 0b1010,
+                 (outs SPR:$dst),(ins SPR:$a),
                  IIC_fpCVTIS, "vcvt", ".f32.s32\t$dst, $a",
                  [(set SPR:$dst, (arm_sitof SPR:$a))]> {
-  let Inst{7} = 1;
+  let Inst{7} = 1; // s32
 }
 
-def VUITOD : AVConv1I<0b11101011, 0b1000, 0b1011, (outs DPR:$dst), (ins SPR:$a),
+def VUITOD : AVConv1I<0b11101, 0b11, 0b1000, 0b1011,
+                 (outs DPR:$dst), (ins SPR:$a),
                  IIC_fpCVTID, "vcvt", ".f64.u32\t$dst, $a",
-                 [(set DPR:$dst, (arm_uitof SPR:$a))]>;
+                 [(set DPR:$dst, (arm_uitof SPR:$a))]> {
+  let Inst{7} = 0; // u32
+}
 
-def VUITOS : AVConv1In<0b11101011, 0b1000, 0b1010, (outs SPR:$dst),(ins SPR:$a),
+def VUITOS : AVConv1In<0b11101, 0b11, 0b1000, 0b1010,
+                 (outs SPR:$dst), (ins SPR:$a),
                  IIC_fpCVTIS, "vcvt", ".f32.u32\t$dst, $a",
-                 [(set SPR:$dst, (arm_uitof SPR:$a))]>;
+                 [(set SPR:$dst, (arm_uitof SPR:$a))]> {
+  let Inst{7} = 0; // u32
+}
 
 // FP to Int:
 // Always set Z bit in the instruction, i.e. "round towards zero" variants.
 
-def VTOSIZD : AVConv1I<0b11101011, 0b1101, 0b1011,
+def VTOSIZD : AVConv1I<0b11101, 0b11, 0b1101, 0b1011,
                        (outs SPR:$dst), (ins DPR:$a),
                  IIC_fpCVTDI, "vcvt", ".s32.f64\t$dst, $a",
                  [(set SPR:$dst, (arm_ftosi DPR:$a))]> {
   let Inst{7} = 1; // Z bit
 }
 
-def VTOSIZS : AVConv1In<0b11101011, 0b1101, 0b1010,
+def VTOSIZS : AVConv1In<0b11101, 0b11, 0b1101, 0b1010,
                         (outs SPR:$dst), (ins SPR:$a),
                  IIC_fpCVTSI, "vcvt", ".s32.f32\t$dst, $a",
                  [(set SPR:$dst, (arm_ftosi SPR:$a))]> {
   let Inst{7} = 1; // Z bit
 }
 
-def VTOUIZD : AVConv1I<0b11101011, 0b1100, 0b1011,
+def VTOUIZD : AVConv1I<0b11101, 0b11, 0b1100, 0b1011,
                        (outs SPR:$dst), (ins DPR:$a),
                  IIC_fpCVTDI, "vcvt", ".u32.f64\t$dst, $a",
                  [(set SPR:$dst, (arm_ftoui DPR:$a))]> {
   let Inst{7} = 1; // Z bit
 }
 
-def VTOUIZS : AVConv1In<0b11101011, 0b1100, 0b1010,
+def VTOUIZS : AVConv1In<0b11101, 0b11, 0b1100, 0b1010,
                         (outs SPR:$dst), (ins SPR:$a),
                  IIC_fpCVTSI, "vcvt", ".u32.f32\t$dst, $a",
                  [(set SPR:$dst, (arm_ftoui SPR:$a))]> {
   let Inst{7} = 1; // Z bit
 }
 
+// And the Z bit '0' variants, i.e. use the rounding mode specified by FPSCR.
+// For disassembly only.
+
+def VTOSIRD : AVConv1I<0b11101, 0b11, 0b1101, 0b1011,
+                       (outs SPR:$dst), (ins DPR:$a),
+                 IIC_fpCVTDI, "vcvtr", ".s32.f64\t$dst, $a",
+                 [/* For disassembly only; pattern left blank */]> {
+  let Inst{7} = 0; // Z bit
+}
+
+def VTOSIRS : AVConv1In<0b11101, 0b11, 0b1101, 0b1010,
+                        (outs SPR:$dst), (ins SPR:$a),
+                 IIC_fpCVTSI, "vcvtr", ".s32.f32\t$dst, $a",
+                 [/* For disassembly only; pattern left blank */]> {
+  let Inst{7} = 0; // Z bit
+}
+
+def VTOUIRD : AVConv1I<0b11101, 0b11, 0b1100, 0b1011,
+                       (outs SPR:$dst), (ins DPR:$a),
+                 IIC_fpCVTDI, "vcvtr", ".u32.f64\t$dst, $a",
+                 [/* For disassembly only; pattern left blank */]> {
+  let Inst{7} = 0; // Z bit
+}
+
+def VTOUIRS : AVConv1In<0b11101, 0b11, 0b1100, 0b1010,
+                        (outs SPR:$dst), (ins SPR:$a),
+                 IIC_fpCVTSI, "vcvtr", ".u32.f32\t$dst, $a",
+                 [/* For disassembly only; pattern left blank */]> {
+  let Inst{7} = 0; // Z bit
+}
+
+// Convert between floating-point and fixed-point
+// Data type for fixed-point naming convention:
+//   S16 (U=0, sx=0) -> SH
+//   U16 (U=1, sx=0) -> UH
+//   S32 (U=0, sx=1) -> SL
+//   U32 (U=1, sx=1) -> UL
+
+let Constraints = "$a = $dst" in {
+
+// FP to Fixed-Point:
+
+def VTOSHS : AVConv1XI<0b11101, 0b11, 0b1110, 0b1010, 0,
+                       (outs SPR:$dst), (ins SPR:$a, i32imm:$fbits),
+                 IIC_fpCVTSI, "vcvt", ".s16.f32\t$dst, $a, $fbits",
+                 [/* For disassembly only; pattern left blank */]>;
+
+def VTOUHS : AVConv1XI<0b11101, 0b11, 0b1111, 0b1010, 0,
+                       (outs SPR:$dst), (ins SPR:$a, i32imm:$fbits),
+                 IIC_fpCVTSI, "vcvt", ".u16.f32\t$dst, $a, $fbits",
+                 [/* For disassembly only; pattern left blank */]>;
+
+def VTOSLS : AVConv1XI<0b11101, 0b11, 0b1110, 0b1010, 1,
+                       (outs SPR:$dst), (ins SPR:$a, i32imm:$fbits),
+                 IIC_fpCVTSI, "vcvt", ".s32.f32\t$dst, $a, $fbits",
+                 [/* For disassembly only; pattern left blank */]>;
+
+def VTOULS : AVConv1XI<0b11101, 0b11, 0b1111, 0b1010, 1,
+                       (outs SPR:$dst), (ins SPR:$a, i32imm:$fbits),
+                 IIC_fpCVTSI, "vcvt", ".u32.f32\t$dst, $a, $fbits",
+                 [/* For disassembly only; pattern left blank */]>;
+
+def VTOSHD : AVConv1XI<0b11101, 0b11, 0b1110, 0b1011, 0,
+                       (outs DPR:$dst), (ins DPR:$a, i32imm:$fbits),
+                 IIC_fpCVTDI, "vcvt", ".s16.f64\t$dst, $a, $fbits",
+                 [/* For disassembly only; pattern left blank */]>;
+
+def VTOUHD : AVConv1XI<0b11101, 0b11, 0b1111, 0b1011, 0,
+                       (outs DPR:$dst), (ins DPR:$a, i32imm:$fbits),
+                 IIC_fpCVTDI, "vcvt", ".u16.f64\t$dst, $a, $fbits",
+                 [/* For disassembly only; pattern left blank */]>;
+
+def VTOSLD : AVConv1XI<0b11101, 0b11, 0b1110, 0b1011, 1,
+                       (outs DPR:$dst), (ins DPR:$a, i32imm:$fbits),
+                 IIC_fpCVTDI, "vcvt", ".s32.f64\t$dst, $a, $fbits",
+                 [/* For disassembly only; pattern left blank */]>;
+
+def VTOULD : AVConv1XI<0b11101, 0b11, 0b1111, 0b1011, 1,
+                       (outs DPR:$dst), (ins DPR:$a, i32imm:$fbits),
+                 IIC_fpCVTDI, "vcvt", ".u32.f64\t$dst, $a, $fbits",
+                 [/* For disassembly only; pattern left blank */]>;
+
+// Fixed-Point to FP:
+
+def VSHTOS : AVConv1XI<0b11101, 0b11, 0b1010, 0b1010, 0,
+                       (outs SPR:$dst), (ins SPR:$a, i32imm:$fbits),
+                 IIC_fpCVTIS, "vcvt", ".f32.s16\t$dst, $a, $fbits",
+                 [/* For disassembly only; pattern left blank */]>;
+
+def VUHTOS : AVConv1XI<0b11101, 0b11, 0b1011, 0b1010, 0,
+                       (outs SPR:$dst), (ins SPR:$a, i32imm:$fbits),
+                 IIC_fpCVTIS, "vcvt", ".f32.u16\t$dst, $a, $fbits",
+                 [/* For disassembly only; pattern left blank */]>;
+
+def VSLTOS : AVConv1XI<0b11101, 0b11, 0b1010, 0b1010, 1,
+                       (outs SPR:$dst), (ins SPR:$a, i32imm:$fbits),
+                 IIC_fpCVTIS, "vcvt", ".f32.s32\t$dst, $a, $fbits",
+                 [/* For disassembly only; pattern left blank */]>;
+
+def VULTOS : AVConv1XI<0b11101, 0b11, 0b1011, 0b1010, 1,
+                       (outs SPR:$dst), (ins SPR:$a, i32imm:$fbits),
+                 IIC_fpCVTIS, "vcvt", ".f32.u32\t$dst, $a, $fbits",
+                 [/* For disassembly only; pattern left blank */]>;
+
+def VSHTOD : AVConv1XI<0b11101, 0b11, 0b1010, 0b1011, 0,
+                       (outs DPR:$dst), (ins DPR:$a, i32imm:$fbits),
+                 IIC_fpCVTID, "vcvt", ".f64.s16\t$dst, $a, $fbits",
+                 [/* For disassembly only; pattern left blank */]>;
+
+def VUHTOD : AVConv1XI<0b11101, 0b11, 0b1011, 0b1011, 0,
+                       (outs DPR:$dst), (ins DPR:$a, i32imm:$fbits),
+                 IIC_fpCVTID, "vcvt", ".f64.u16\t$dst, $a, $fbits",
+                 [/* For disassembly only; pattern left blank */]>;
+
+def VSLTOD : AVConv1XI<0b11101, 0b11, 0b1010, 0b1011, 1,
+                       (outs DPR:$dst), (ins DPR:$a, i32imm:$fbits),
+                 IIC_fpCVTID, "vcvt", ".f64.s32\t$dst, $a, $fbits",
+                 [/* For disassembly only; pattern left blank */]>;
+
+def VULTOD : AVConv1XI<0b11101, 0b11, 0b1011, 0b1011, 1,
+                       (outs DPR:$dst), (ins DPR:$a, i32imm:$fbits),
+                 IIC_fpCVTID, "vcvt", ".f64.u32\t$dst, $a, $fbits",
+                 [/* For disassembly only; pattern left blank */]>;
+
+} // End of 'let Constraints = "$a = $dst" in'
+
 //===----------------------------------------------------------------------===//
 // FP FMA Operations.
 //
 
-def VMLAD : ADbI<0b11100000, (outs DPR:$dst), (ins DPR:$dstin, DPR:$a, DPR:$b),
+def VMLAD : ADbI<0b11100, 0b00, 0, 0,
+                (outs DPR:$dst), (ins DPR:$dstin, DPR:$a, DPR:$b),
                 IIC_fpMAC64, "vmla", ".f64\t$dst, $a, $b",
                 [(set DPR:$dst, (fadd (fmul DPR:$a, DPR:$b), DPR:$dstin))]>,
                 RegConstraint<"$dstin = $dst">;
 
-def VMLAS : ASbIn<0b11100000, (outs SPR:$dst), (ins SPR:$dstin, SPR:$a, SPR:$b),
+def VMLAS : ASbIn<0b11100, 0b00, 0, 0,
+                 (outs SPR:$dst), (ins SPR:$dstin, SPR:$a, SPR:$b),
                  IIC_fpMAC32, "vmla", ".f32\t$dst, $a, $b",
                  [(set SPR:$dst, (fadd (fmul SPR:$a, SPR:$b), SPR:$dstin))]>,
                  RegConstraint<"$dstin = $dst">;
 
-def VNMLSD : ADbI<0b11100001, (outs DPR:$dst), (ins DPR:$dstin, DPR:$a, DPR:$b),
+def VNMLSD : ADbI<0b11100, 0b01, 0, 0,
+                (outs DPR:$dst), (ins DPR:$dstin, DPR:$a, DPR:$b),
                 IIC_fpMAC64, "vnmls", ".f64\t$dst, $a, $b",
                 [(set DPR:$dst, (fsub (fmul DPR:$a, DPR:$b), DPR:$dstin))]>,
                 RegConstraint<"$dstin = $dst">;
 
-def VNMLSS : ASbI<0b11100001, (outs SPR:$dst), (ins SPR:$dstin, SPR:$a, SPR:$b),
+def VNMLSS : ASbI<0b11100, 0b01, 0, 0,
+                (outs SPR:$dst), (ins SPR:$dstin, SPR:$a, SPR:$b),
                 IIC_fpMAC32, "vnmls", ".f32\t$dst, $a, $b",
                 [(set SPR:$dst, (fsub (fmul SPR:$a, SPR:$b), SPR:$dstin))]>,
                 RegConstraint<"$dstin = $dst">;
 
-def VMLSD : ADbI<0b11100000, (outs DPR:$dst), (ins DPR:$dstin, DPR:$a, DPR:$b),
+def VMLSD : ADbI<0b11100, 0b00, 1, 0,
+                 (outs DPR:$dst), (ins DPR:$dstin, DPR:$a, DPR:$b),
                  IIC_fpMAC64, "vmls", ".f64\t$dst, $a, $b",
              [(set DPR:$dst, (fadd (fneg (fmul DPR:$a, DPR:$b)), DPR:$dstin))]>,
-                RegConstraint<"$dstin = $dst"> {
-  let Inst{6} = 1;
-}
+                RegConstraint<"$dstin = $dst">;
 
-def VMLSS : ASbIn<0b11100000, (outs SPR:$dst), (ins SPR:$dstin, SPR:$a, SPR:$b),
+def VMLSS : ASbIn<0b11100, 0b00, 1, 0,
+                  (outs SPR:$dst), (ins SPR:$dstin, SPR:$a, SPR:$b),
                   IIC_fpMAC32, "vmls", ".f32\t$dst, $a, $b",
              [(set SPR:$dst, (fadd (fneg (fmul SPR:$a, SPR:$b)), SPR:$dstin))]>,
-                RegConstraint<"$dstin = $dst"> {
-  let Inst{6} = 1;
-}
+                RegConstraint<"$dstin = $dst">;
 
 def : Pat<(fsub DPR:$dstin, (fmul DPR:$a, DPR:$b)),
           (VMLSD DPR:$dstin, DPR:$a, DPR:$b)>, Requires<[DontUseNEONForFP]>;
 def : Pat<(fsub SPR:$dstin, (fmul SPR:$a, SPR:$b)),
           (VMLSS SPR:$dstin, SPR:$a, SPR:$b)>, Requires<[DontUseNEONForFP]>;
 
-def VNMLAD : ADbI<0b11100001, (outs DPR:$dst), (ins DPR:$dstin, DPR:$a, DPR:$b),
+def VNMLAD : ADbI<0b11100, 0b01, 1, 0,
+                 (outs DPR:$dst), (ins DPR:$dstin, DPR:$a, DPR:$b),
                  IIC_fpMAC64, "vnmla", ".f64\t$dst, $a, $b",
              [(set DPR:$dst, (fsub (fneg (fmul DPR:$a, DPR:$b)), DPR:$dstin))]>,
-                RegConstraint<"$dstin = $dst"> {
-  let Inst{6} = 1;
-}
+                RegConstraint<"$dstin = $dst">;
 
-def VNMLAS : ASbI<0b11100001, (outs SPR:$dst), (ins SPR:$dstin, SPR:$a, SPR:$b),
+def VNMLAS : ASbI<0b11100, 0b01, 1, 0,
+                (outs SPR:$dst), (ins SPR:$dstin, SPR:$a, SPR:$b),
                 IIC_fpMAC32, "vnmla", ".f32\t$dst, $a, $b",
              [(set SPR:$dst, (fsub (fneg (fmul SPR:$a, SPR:$b)), SPR:$dstin))]>,
-                RegConstraint<"$dstin = $dst"> {
-  let Inst{6} = 1;
-}
+                RegConstraint<"$dstin = $dst">;
 
 //===----------------------------------------------------------------------===//
 // FP Conditional moves.
 //
 
-def VMOVDcc  : ADuI<0b11101011, 0b0000, 0b0100,
+def VMOVDcc  : ADuI<0b11101, 0b11, 0b0000, 0b01, 0,
                     (outs DPR:$dst), (ins DPR:$false, DPR:$true),
                     IIC_fpUNA64, "vmov", ".f64\t$dst, $true",
                 [/*(set DPR:$dst, (ARMcmov DPR:$false, DPR:$true, imm:$cc))*/]>,
                     RegConstraint<"$false = $dst">;
 
-def VMOVScc  : ASuI<0b11101011, 0b0000, 0b0100,
+def VMOVScc  : ASuI<0b11101, 0b11, 0b0000, 0b01, 0,
                     (outs SPR:$dst), (ins SPR:$false, SPR:$true),
                     IIC_fpUNA32, "vmov", ".f32\t$dst, $true",
                 [/*(set SPR:$dst, (ARMcmov SPR:$false, SPR:$true, imm:$cc))*/]>,
                     RegConstraint<"$false = $dst">;
 
-def VNEGDcc  : ADuI<0b11101011, 0b0001, 0b0100,
+def VNEGDcc  : ADuI<0b11101, 0b11, 0b0001, 0b01, 0,
                     (outs DPR:$dst), (ins DPR:$false, DPR:$true),
                     IIC_fpUNA64, "vneg", ".f64\t$dst, $true",
                 [/*(set DPR:$dst, (ARMcneg DPR:$false, DPR:$true, imm:$cc))*/]>,
                     RegConstraint<"$false = $dst">;
 
-def VNEGScc  : ASuI<0b11101011, 0b0001, 0b0100,
+def VNEGScc  : ASuI<0b11101, 0b11, 0b0001, 0b01, 0,
                     (outs SPR:$dst), (ins SPR:$false, SPR:$true),
                     IIC_fpUNA32, "vneg", ".f32\t$dst, $true",
                 [/*(set SPR:$dst, (ARMcneg SPR:$false, SPR:$true, imm:$cc))*/]>,
@@ -432,6 +611,31 @@ def FMSTAT : VFPAI<(outs), (ins), VFPMiscFrm, IIC_fpSTAT, "vmrs",
   let Inst{4}     = 1;
 }
 
+// FPSCR <-> GPR (for disassembly only)
+
+let Uses = [FPSCR] in {
+def VMRS : VFPAI<(outs GPR:$dst), (ins), VFPMiscFrm, IIC_fpSTAT, "vmrs",
+                 "\t$dst, fpscr",
+             [/* For disassembly only; pattern left blank */]> {
+  let Inst{27-20} = 0b11101111;
+  let Inst{19-16} = 0b0001;
+  let Inst{11-8}  = 0b1010;
+  let Inst{7}     = 0;
+  let Inst{4}     = 1;
+}
+}
+
+let Defs = [FPSCR] in {
+def VMSR : VFPAI<(outs), (ins GPR:$src), VFPMiscFrm, IIC_fpSTAT, "vmsr",
+                 "\tfpscr, $src",
+             [/* For disassembly only; pattern left blank */]> {
+  let Inst{27-20} = 0b11101110;
+  let Inst{19-16} = 0b0001;
+  let Inst{11-8}  = 0b1010;
+  let Inst{7}     = 0;
+  let Inst{4}     = 1;
+}
+}
 
 // Materialize FP immediates. VFP3 only.
 let isReMaterializable = 1 in {
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMLoadStoreOptimizer.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMLoadStoreOptimizer.cpp
index b78b95b..4e2d181 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMLoadStoreOptimizer.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMLoadStoreOptimizer.cpp
@@ -350,7 +350,8 @@ ARMLoadStoreOpt::MergeLDR_STR(MachineBasicBlock &MBB, unsigned SIndex,
       : ARMRegisterInfo::getRegisterNumbering(Reg);
     // AM4 - register numbers in ascending order.
     // AM5 - consecutive register numbers in ascending order.
-    if (NewOffset == Offset + (int)Size &&
+    if (Reg != ARM::SP &&
+        NewOffset == Offset + (int)Size &&
         ((isAM4 && RegNum > PRegNum) || RegNum == PRegNum+1)) {
       Offset += Size;
       PRegNum = RegNum;
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMMCAsmInfo.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMMCAsmInfo.cpp
index 86693b6..ccd6add 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMMCAsmInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMMCAsmInfo.cpp
@@ -52,15 +52,16 @@ ARMMCAsmInfoDarwin::ARMMCAsmInfoDarwin() {
 }
 
 ARMELFMCAsmInfo::ARMELFMCAsmInfo() {
+  // ".comm align is in bytes but .align is pow-2."
+  AlignmentIsInBytes = false;
+
   Data64bitsDirective = 0;
   CommentString = "@";
 
-  NeedsSet = false;
   HasLEB128 = true;
   AbsoluteDebugSectionOffsets = true;
   PrivateGlobalPrefix = ".L";
   WeakRefDirective = "\t.weak\t";
-  SetDirective = "\t.set\t";
   HasLCOMMDirective = true;
 
   DwarfRequiresFrameSection = false;
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMMachineFunctionInfo.h b/libclamav/c++/llvm/lib/Target/ARM/ARMMachineFunctionInfo.h
index 2176b27..c998ede 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMMachineFunctionInfo.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMMachineFunctionInfo.h
@@ -35,11 +35,6 @@ class ARMFunctionInfo : public MachineFunctionInfo {
   /// 'isThumb'.
   bool hasThumb2;
 
-  /// Align - required alignment.  ARM functions and Thumb functions with
-  /// constant pools require 4-byte alignment; other Thumb functions
-  /// require only 2-byte alignment.
-  unsigned Align;
-
   /// VarArgsRegSaveSize - Size of the register save area for vararg functions.
   ///
   unsigned VarArgsRegSaveSize;
@@ -94,7 +89,6 @@ public:
   ARMFunctionInfo() :
     isThumb(false),
     hasThumb2(false),
-    Align(2U),
     VarArgsRegSaveSize(0), HasStackFrame(false),
     LRSpilledForFarJump(false),
     FramePtrSpillOffset(0), GPRCS1Offset(0), GPRCS2Offset(0), DPRCSOffset(0),
@@ -105,7 +99,6 @@ public:
   explicit ARMFunctionInfo(MachineFunction &MF) :
     isThumb(MF.getTarget().getSubtarget<ARMSubtarget>().isThumb()),
     hasThumb2(MF.getTarget().getSubtarget<ARMSubtarget>().hasThumb2()),
-    Align(isThumb ? 1U : 2U),
     VarArgsRegSaveSize(0), HasStackFrame(false),
     LRSpilledForFarJump(false),
     FramePtrSpillOffset(0), GPRCS1Offset(0), GPRCS2Offset(0), DPRCSOffset(0),
@@ -118,9 +111,6 @@ public:
   bool isThumb1OnlyFunction() const { return isThumb && !hasThumb2; }
   bool isThumb2Function() const { return isThumb && hasThumb2; }
 
-  unsigned getAlign() const { return Align; }
-  void setAlign(unsigned a) { Align = a; }
-
   unsigned getVarArgsRegSaveSize() const { return VarArgsRegSaveSize; }
   void setVarArgsRegSaveSize(unsigned s) { VarArgsRegSaveSize = s; }
 
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMSubtarget.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMSubtarget.cpp
index 71f3883..426862c 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMSubtarget.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMSubtarget.cpp
@@ -122,9 +122,9 @@ ARMSubtarget::GVIsIndirectSymbol(GlobalValue *GV, Reloc::Model RelocM) const {
   if (RelocM == Reloc::Static)
     return false;
 
-  // GV with ghost linkage (in JIT lazy compilation mode) do not require an
-  // extra load from stub.
-  bool isDecl = GV->isDeclaration() && !GV->hasNotBeenReadFromBitcode();
+  // Materializable GVs (in JIT lazy compilation mode) do not require an extra
+  // load from stub.
+  bool isDecl = GV->isDeclaration() && !GV->isMaterializable();
 
   if (!isTargetDarwin()) {
     // Extra load is needed for all externally visible.
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMTargetMachine.cpp b/libclamav/c++/llvm/lib/Target/ARM/ARMTargetMachine.cpp
index 4d20a5c..7233f5c 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMTargetMachine.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMTargetMachine.cpp
@@ -133,18 +133,6 @@ bool ARMBaseTargetMachine::addPreEmitPass(PassManagerBase &PM,
 
 bool ARMBaseTargetMachine::addCodeEmitter(PassManagerBase &PM,
                                           CodeGenOpt::Level OptLevel,
-                                          MachineCodeEmitter &MCE) {
-  // FIXME: Move this to TargetJITInfo!
-  if (DefRelocModel == Reloc::Default)
-    setRelocationModel(Reloc::Static);
-
-  // Machine code emitter pass for ARM.
-  PM.add(createARMCodeEmitterPass(*this, MCE));
-  return false;
-}
-
-bool ARMBaseTargetMachine::addCodeEmitter(PassManagerBase &PM,
-                                          CodeGenOpt::Level OptLevel,
                                           JITCodeEmitter &JCE) {
   // FIXME: Move this to TargetJITInfo!
   if (DefRelocModel == Reloc::Default)
@@ -154,40 +142,3 @@ bool ARMBaseTargetMachine::addCodeEmitter(PassManagerBase &PM,
   PM.add(createARMJITCodeEmitterPass(*this, JCE));
   return false;
 }
-
-bool ARMBaseTargetMachine::addCodeEmitter(PassManagerBase &PM,
-                                          CodeGenOpt::Level OptLevel,
-                                          ObjectCodeEmitter &OCE) {
-  // FIXME: Move this to TargetJITInfo!
-  if (DefRelocModel == Reloc::Default)
-    setRelocationModel(Reloc::Static);
-
-  // Machine code emitter pass for ARM.
-  PM.add(createARMObjectCodeEmitterPass(*this, OCE));
-  return false;
-}
-
-bool ARMBaseTargetMachine::addSimpleCodeEmitter(PassManagerBase &PM,
-                                                CodeGenOpt::Level OptLevel,
-                                                MachineCodeEmitter &MCE) {
-  // Machine code emitter pass for ARM.
-  PM.add(createARMCodeEmitterPass(*this, MCE));
-  return false;
-}
-
-bool ARMBaseTargetMachine::addSimpleCodeEmitter(PassManagerBase &PM,
-                                                CodeGenOpt::Level OptLevel,
-                                                JITCodeEmitter &JCE) {
-  // Machine code emitter pass for ARM.
-  PM.add(createARMJITCodeEmitterPass(*this, JCE));
-  return false;
-}
-
-bool ARMBaseTargetMachine::addSimpleCodeEmitter(PassManagerBase &PM,
-                                            CodeGenOpt::Level OptLevel,
-                                            ObjectCodeEmitter &OCE) {
-  // Machine code emitter pass for ARM.
-  PM.add(createARMObjectCodeEmitterPass(*this, OCE));
-  return false;
-}
-
diff --git a/libclamav/c++/llvm/lib/Target/ARM/ARMTargetMachine.h b/libclamav/c++/llvm/lib/Target/ARM/ARMTargetMachine.h
index dd9542e..88e67e3 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/ARMTargetMachine.h
+++ b/libclamav/c++/llvm/lib/Target/ARM/ARMTargetMachine.h
@@ -53,20 +53,7 @@ public:
   virtual bool addPreSched2(PassManagerBase &PM, CodeGenOpt::Level OptLevel);
   virtual bool addPreEmitPass(PassManagerBase &PM, CodeGenOpt::Level OptLevel);
   virtual bool addCodeEmitter(PassManagerBase &PM, CodeGenOpt::Level OptLevel,
-                              MachineCodeEmitter &MCE);
-  virtual bool addCodeEmitter(PassManagerBase &PM, CodeGenOpt::Level OptLevel,
                               JITCodeEmitter &MCE);
-  virtual bool addCodeEmitter(PassManagerBase &PM, CodeGenOpt::Level OptLevel,
-                              ObjectCodeEmitter &OCE);
-  virtual bool addSimpleCodeEmitter(PassManagerBase &PM,
-                                    CodeGenOpt::Level OptLevel,
-                                    MachineCodeEmitter &MCE);
-  virtual bool addSimpleCodeEmitter(PassManagerBase &PM,
-                                    CodeGenOpt::Level OptLevel,
-                                    JITCodeEmitter &MCE);
-  virtual bool addSimpleCodeEmitter(PassManagerBase &PM,
-                                    CodeGenOpt::Level OptLevel,
-                                    ObjectCodeEmitter &OCE);
 };
 
 /// ARMTargetMachine - ARM target machine.
diff --git a/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMAsmPrinter.cpp b/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMAsmPrinter.cpp
index bb6fc2f..0a75c09 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMAsmPrinter.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMAsmPrinter.cpp
@@ -43,7 +43,6 @@
 #include "llvm/Target/TargetRegistry.h"
 #include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/ADT/SmallString.h"
-#include "llvm/ADT/Statistic.h"
 #include "llvm/ADT/StringExtras.h"
 #include "llvm/ADT/StringSet.h"
 #include "llvm/Support/CommandLine.h"
@@ -53,8 +52,6 @@
 #include <cctype>
 using namespace llvm;
 
-STATISTIC(EmittedInsts, "Number of machine instrs printed");
-
 static cl::opt<bool>
 EnableMCInst("enable-arm-mcinst-printer", cl::Hidden,
             cl::desc("enable experimental asmprinter gunk in the arm backend"));
@@ -76,8 +73,9 @@ namespace {
 
   public:
     explicit ARMAsmPrinter(formatted_raw_ostream &O, TargetMachine &TM,
-                           const MCAsmInfo *T, bool V)
-      : AsmPrinter(O, TM, T, V), AFI(NULL), MCP(NULL) {
+                           MCContext &Ctx, MCStreamer &Streamer,
+                           const MCAsmInfo *T)
+      : AsmPrinter(O, TM, Ctx, Streamer, T), AFI(NULL), MCP(NULL) {
       Subtarget = &TM.getSubtarget<ARMSubtarget>();
     }
 
@@ -85,10 +83,6 @@ namespace {
       return "ARM Assembly Printer";
     }
     
-    void printMCInst(const MCInst *MI) {
-      ARMInstPrinter(O, *MAI, VerboseAsm).printInstruction(MI);
-    }
-    
     void printInstructionThroughMCStreamer(const MachineInstr *MI);
     
 
@@ -162,8 +156,11 @@ namespace {
     void printInstruction(const MachineInstr *MI);  // autogenerated.
     static const char *getRegisterName(unsigned RegNo);
 
-    void printMachineInstruction(const MachineInstr *MI);
+    virtual void EmitInstruction(const MachineInstr *MI);
     bool runOnMachineFunction(MachineFunction &F);
+    
+    virtual void EmitConstantPool() {} // We emit constant pools ourselves.
+    virtual void EmitFunctionEntryLabel();
     void EmitStartOfAsmFile(Module &M);
     void EmitEndOfAsmFile(Module &M);
 
@@ -203,7 +200,7 @@ namespace {
           
           MachineModuleInfoMachO &MMIMachO =
             MMI->getObjFileInfo<MachineModuleInfoMachO>();
-          const MCSymbol *&StubSym =
+          MCSymbol *&StubSym =
             GV->hasHiddenVisibility() ? MMIMachO.getHiddenGVStubEntry(Sym) :
                                         MMIMachO.getGVStubEntry(Sym);
           if (StubSym == 0)
@@ -223,7 +220,7 @@ namespace {
            O << "-.";
          O << ')';
       }
-      O << '\n';
+      OutStreamer.AddBlankLine();
     }
 
     void getAnalysisUsage(AnalysisUsage &AU) const {
@@ -237,95 +234,26 @@ namespace {
 
 #include "ARMGenAsmWriter.inc"
 
-/// runOnMachineFunction - This uses the printInstruction()
-/// method to print assembly for each instruction.
-///
-bool ARMAsmPrinter::runOnMachineFunction(MachineFunction &MF) {
-  AFI = MF.getInfo<ARMFunctionInfo>();
-  MCP = MF.getConstantPool();
-
-  SetupMachineFunction(MF);
-  O << "\n";
-
-  // NOTE: we don't print out constant pools here, they are handled as
-  // instructions.
-
-  O << '\n';
-
-  // Print out labels for the function.
-  const Function *F = MF.getFunction();
-  OutStreamer.SwitchSection(getObjFileLowering().SectionForGlobal(F, Mang, TM));
-
-  switch (F->getLinkage()) {
-  default: llvm_unreachable("Unknown linkage type!");
-  case Function::PrivateLinkage:
-  case Function::InternalLinkage:
-    break;
-  case Function::ExternalLinkage:
-    O << "\t.globl\t" << *CurrentFnSym << "\n";
-    break;
-  case Function::LinkerPrivateLinkage:
-  case Function::WeakAnyLinkage:
-  case Function::WeakODRLinkage:
-  case Function::LinkOnceAnyLinkage:
-  case Function::LinkOnceODRLinkage:
-    if (Subtarget->isTargetDarwin()) {
-      O << "\t.globl\t" << *CurrentFnSym << "\n";
-      O << "\t.weak_definition\t" << *CurrentFnSym << "\n";
-    } else {
-      O << MAI->getWeakRefDirective() << *CurrentFnSym << "\n";
-    }
-    break;
-  }
-
-  printVisibility(CurrentFnSym, F->getVisibility());
-
-  unsigned FnAlign = 1 << MF.getAlignment();  // MF alignment is log2.
+void ARMAsmPrinter::EmitFunctionEntryLabel() {
   if (AFI->isThumbFunction()) {
-    EmitAlignment(FnAlign, F, AFI->getAlign());
     O << "\t.code\t16\n";
     O << "\t.thumb_func";
     if (Subtarget->isTargetDarwin())
-      O << "\t" << *CurrentFnSym;
-    O << "\n";
-  } else {
-    EmitAlignment(FnAlign, F);
-  }
-
-  O << *CurrentFnSym << ":\n";
-  // Emit pre-function debug information.
-  DW->BeginFunction(&MF);
-
-  if (Subtarget->isTargetDarwin()) {
-    // If the function is empty, then we need to emit *something*. Otherwise,
-    // the function's label might be associated with something that it wasn't
-    // meant to be associated with. We emit a noop in this situation.
-    MachineFunction::iterator I = MF.begin();
-
-    if (++I == MF.end() && MF.front().empty())
-      O << "\tnop\n";
-  }
-
-  // Print out code for the function.
-  for (MachineFunction::const_iterator I = MF.begin(), E = MF.end();
-       I != E; ++I) {
-    // Print a label for the basic block.
-    if (I != MF.begin())
-      EmitBasicBlockStart(I);
-
-    // Print the assembly for the instruction.
-    for (MachineBasicBlock::const_iterator II = I->begin(), E = I->end();
-         II != E; ++II)
-      printMachineInstruction(II);
+      O << '\t' << *CurrentFnSym;
+    O << '\n';
   }
+  
+  OutStreamer.EmitLabel(CurrentFnSym);
+}
 
-  if (MAI->hasDotTypeDotSizeDirective())
-    O << "\t.size " << *CurrentFnSym << ", .-" << *CurrentFnSym << "\n";
-
-  // Emit post-function debug information.
-  DW->EndFunction(&MF);
+/// runOnMachineFunction - This uses the printInstruction()
+/// method to print assembly for each instruction.
+///
+bool ARMAsmPrinter::runOnMachineFunction(MachineFunction &MF) {
+  AFI = MF.getInfo<ARMFunctionInfo>();
+  MCP = MF.getConstantPool();
 
-  return false;
+  return AsmPrinter::runOnMachineFunction(MF);
 }
 
 void ARMAsmPrinter::printOperand(const MachineInstr *MI, int OpNum,
@@ -891,7 +819,7 @@ void ARMAsmPrinter::printCPInstOperand(const MachineInstr *MI, int OpNum,
   // data itself.
   if (!strcmp(Modifier, "label")) {
     unsigned ID = MI->getOperand(OpNum).getImm();
-    O << *GetCPISymbol(ID) << ":\n";
+    OutStreamer.EmitLabel(GetCPISymbol(ID));
   } else {
     assert(!strcmp(Modifier, "cpentry") && "Unknown modifier for CPE");
     unsigned CPI = MI->getOperand(OpNum).getIndex();
@@ -939,14 +867,14 @@ void ARMAsmPrinter::printJTBlockOperand(const MachineInstr *MI, int OpNum) {
   const MachineJumpTableInfo *MJTI = MF->getJumpTableInfo();
   const std::vector<MachineJumpTableEntry> &JT = MJTI->getJumpTables();
   const std::vector<MachineBasicBlock*> &JTBBs = JT[JTI].MBBs;
-  bool UseSet= MAI->getSetDirective() && TM.getRelocationModel() == Reloc::PIC_;
+  bool UseSet= MAI->hasSetDirective() && TM.getRelocationModel() == Reloc::PIC_;
   SmallPtrSet<MachineBasicBlock*, 8> JTSets;
   for (unsigned i = 0, e = JTBBs.size(); i != e; ++i) {
     MachineBasicBlock *MBB = JTBBs[i];
     bool isNew = JTSets.insert(MBB);
 
     if (UseSet && isNew) {
-      O << MAI->getSetDirective() << ' '
+      O << "\t.set\t"
         << *GetARMSetPICJumpTableLabel2(JTI, MO2.getImm(), MBB) << ','
         << *MBB->getSymbol(OutContext) << '-' << *JTISymbol << '\n';
     }
@@ -1093,12 +1021,7 @@ bool ARMAsmPrinter::PrintAsmMemoryOperand(const MachineInstr *MI,
   return false;
 }
 
-void ARMAsmPrinter::printMachineInstruction(const MachineInstr *MI) {
-  ++EmittedInsts;
-
-  // Call the autogenerated instruction printer routines.
-  processDebugLoc(MI, true);
-  
+void ARMAsmPrinter::EmitInstruction(const MachineInstr *MI) {
   if (EnableMCInst) {
     printInstructionThroughMCStreamer(MI);
   } else {
@@ -1107,12 +1030,8 @@ void ARMAsmPrinter::printMachineInstruction(const MachineInstr *MI) {
       EmitAlignment(2);
     
     printInstruction(MI);
+    OutStreamer.AddBlankLine();
   }
-  
-  if (VerboseAsm)
-    EmitComments(*MI);
-  O << '\n';
-  processDebugLoc(MI, false);
 }
 
 void ARMAsmPrinter::EmitStartOfAsmFile(Module &M) {
@@ -1232,20 +1151,6 @@ void ARMAsmPrinter::printInstructionThroughMCStreamer(const MachineInstr *MI) {
   case ARM::t2MOVi32imm:
     assert(0 && "Should be lowered by thumb2it pass");
   default: break;
-  case TargetInstrInfo::DBG_LABEL:
-  case TargetInstrInfo::EH_LABEL:
-  case TargetInstrInfo::GC_LABEL:
-    printLabel(MI);
-    return;
-  case TargetInstrInfo::KILL:
-    printKill(MI);
-    return;
-  case TargetInstrInfo::INLINEASM:
-    printInlineAsm(MI);
-    return;
-  case TargetInstrInfo::IMPLICIT_DEF:
-    printImplicitDef(MI);
-    return;
   case ARM::PICADD: { // FIXME: Remove asm string from td file.
     // This is a pseudo op for a label + instruction sequence, which looks like:
     // LPC0:
@@ -1267,7 +1172,7 @@ void ARMAsmPrinter::printInstructionThroughMCStreamer(const MachineInstr *MI) {
     AddInst.addOperand(MCOperand::CreateReg(MI->getOperand(0).getReg()));
     AddInst.addOperand(MCOperand::CreateReg(ARM::PC));
     AddInst.addOperand(MCOperand::CreateReg(MI->getOperand(1).getReg()));
-    printMCInst(&AddInst);
+    OutStreamer.EmitInstruction(AddInst);
     return;
   }
   case ARM::CONSTPOOL_ENTRY: { // FIXME: Remove asm string from td file.
@@ -1308,8 +1213,7 @@ void ARMAsmPrinter::printInstructionThroughMCStreamer(const MachineInstr *MI) {
       TmpInst.addOperand(MCOperand::CreateReg(MI->getOperand(3).getReg()));
 
       TmpInst.addOperand(MCOperand::CreateReg(0));          // cc_out
-      printMCInst(&TmpInst);
-      O << '\n';
+      OutStreamer.EmitInstruction(TmpInst);
     }
 
     {
@@ -1323,7 +1227,7 @@ void ARMAsmPrinter::printInstructionThroughMCStreamer(const MachineInstr *MI) {
       TmpInst.addOperand(MCOperand::CreateReg(MI->getOperand(3).getReg()));
       
       TmpInst.addOperand(MCOperand::CreateReg(0));          // cc_out
-      printMCInst(&TmpInst);
+      OutStreamer.EmitInstruction(TmpInst);
     }
     return; 
   }
@@ -1342,8 +1246,7 @@ void ARMAsmPrinter::printInstructionThroughMCStreamer(const MachineInstr *MI) {
       TmpInst.addOperand(MCOperand::CreateImm(MI->getOperand(2).getImm()));
       TmpInst.addOperand(MCOperand::CreateReg(MI->getOperand(3).getReg()));
       
-      printMCInst(&TmpInst);
-      O << '\n';
+      OutStreamer.EmitInstruction(TmpInst);
     }
     
     {
@@ -1357,7 +1260,7 @@ void ARMAsmPrinter::printInstructionThroughMCStreamer(const MachineInstr *MI) {
       TmpInst.addOperand(MCOperand::CreateImm(MI->getOperand(2).getImm()));
       TmpInst.addOperand(MCOperand::CreateReg(MI->getOperand(3).getReg()));
       
-      printMCInst(&TmpInst);
+      OutStreamer.EmitInstruction(TmpInst);
     }
     
     return;
@@ -1366,8 +1269,7 @@ void ARMAsmPrinter::printInstructionThroughMCStreamer(const MachineInstr *MI) {
       
   MCInst TmpInst;
   MCInstLowering.Lower(MI, TmpInst);
-  
-  printMCInst(&TmpInst);
+  OutStreamer.EmitInstruction(TmpInst);
 }
 
 //===----------------------------------------------------------------------===//
diff --git a/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMInstPrinter.cpp b/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMInstPrinter.cpp
index 97aa351..d7d8e09 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMInstPrinter.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/AsmPrinter/ARMInstPrinter.cpp
@@ -24,7 +24,6 @@ using namespace llvm;
 // Include the auto-generated portion of the assembly writer.
 #define MachineInstr MCInst
 #define ARMAsmPrinter ARMInstPrinter  // FIXME: REMOVE.
-#define NO_ASM_WRITER_BOILERPLATE
 #include "ARMGenAsmWriter.inc"
 #undef MachineInstr
 #undef ARMAsmPrinter
diff --git a/libclamav/c++/llvm/lib/Target/ARM/README.txt b/libclamav/c++/llvm/lib/Target/ARM/README.txt
index a6f26a5..9efb5a1 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/README.txt
+++ b/libclamav/c++/llvm/lib/Target/ARM/README.txt
@@ -71,26 +71,6 @@ were disabled due to badness with the ARM carry flag on subtracts.
 
 //===---------------------------------------------------------------------===//
 
-We currently compile abs:
-int foo(int p) { return p < 0 ? -p : p; }
-
-into:
-
-_foo:
-        rsb r1, r0, #0
-        cmn r0, #1
-        movgt r1, r0
-        mov r0, r1
-        bx lr
-
-This is very, uh, literal.  This could be a 3 operation sequence:
-  t = (p sra 31); 
-  res = (p xor t)-t
-
-Which would be better.  This occurs in png decode.
-
-//===---------------------------------------------------------------------===//
-
 More load / store optimizations:
 1) Better representation for block transfer? This is from Olden/power:
 
diff --git a/libclamav/c++/llvm/lib/Target/ARM/Thumb2InstrInfo.cpp b/libclamav/c++/llvm/lib/Target/ARM/Thumb2InstrInfo.cpp
index 387edaf..20f13f1 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/Thumb2InstrInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/Thumb2InstrInfo.cpp
@@ -382,8 +382,8 @@ bool llvm::rewriteT2FrameIndex(MachineInstr &MI, unsigned FrameRegIdx,
     MI.getOperand(FrameRegIdx+1).ChangeToImmediate(ThisImmVal);
   } else {
 
-    // AddrMode4 cannot handle any offset.
-    if (AddrMode == ARMII::AddrMode4)
+    // AddrMode4 and AddrMode6 cannot handle any offset.
+    if (AddrMode == ARMII::AddrMode4 || AddrMode == ARMII::AddrMode6)
       return false;
 
     // AddrModeT2_so cannot handle any offset. If there is no offset
@@ -418,15 +418,12 @@ bool llvm::rewriteT2FrameIndex(MachineInstr &MI, unsigned FrameRegIdx,
         NewOpc = positiveOffsetOpcode(Opcode);
         NumBits = 12;
       }
-    } else {
-      // VFP and NEON address modes.
-      int InstrOffs = 0;
-      if (AddrMode == ARMII::AddrMode5) {
-        const MachineOperand &OffOp = MI.getOperand(FrameRegIdx+1);
-        InstrOffs = ARM_AM::getAM5Offset(OffOp.getImm());
-        if (ARM_AM::getAM5Op(OffOp.getImm()) == ARM_AM::sub)
-          InstrOffs *= -1;
-      }
+    } else if (AddrMode == ARMII::AddrMode5) {
+      // VFP address mode.
+      const MachineOperand &OffOp = MI.getOperand(FrameRegIdx+1);
+      int InstrOffs = ARM_AM::getAM5Offset(OffOp.getImm());
+      if (ARM_AM::getAM5Op(OffOp.getImm()) == ARM_AM::sub)
+        InstrOffs *= -1;
       NumBits = 8;
       Scale = 4;
       Offset += InstrOffs * 4;
@@ -435,6 +432,8 @@ bool llvm::rewriteT2FrameIndex(MachineInstr &MI, unsigned FrameRegIdx,
         Offset = -Offset;
         isSub = true;
       }
+    } else {
+      llvm_unreachable("Unsupported addressing mode!");
     }
 
     if (NewOpc != Opcode)
diff --git a/libclamav/c++/llvm/lib/Target/ARM/Thumb2SizeReduction.cpp b/libclamav/c++/llvm/lib/Target/ARM/Thumb2SizeReduction.cpp
index 95288bf..5086eff 100644
--- a/libclamav/c++/llvm/lib/Target/ARM/Thumb2SizeReduction.cpp
+++ b/libclamav/c++/llvm/lib/Target/ARM/Thumb2SizeReduction.cpp
@@ -83,7 +83,7 @@ namespace {
     // FIXME: Do we need the 16-bit 'S' variant?
     { ARM::t2MOVr,ARM::tMOVgpr2gpr,0,            0,   0,    0,   0,  1,0, 0 },
     { ARM::t2MOVCCr,0,            ARM::tMOVCCr,  0,   0,    0,   0,  0,1, 0 },
-    { ARM::t2MOVCCi,0,            ARM::tMOVCCi,  0,   8,    0,   0,  0,1, 0 },
+    { ARM::t2MOVCCi,0,            ARM::tMOVCCi,  0,   8,    0,   1,  0,1, 0 },
     { ARM::t2MUL,   0,            ARM::tMUL,     0,   0,    0,   1,  0,0, 0 },
     { ARM::t2MVNr,  ARM::tMVN,    0,             0,   0,    1,   0,  0,0, 0 },
     { ARM::t2ORRrr, 0,            ARM::tORR,     0,   0,    0,   1,  0,0, 0 },
diff --git a/libclamav/c++/llvm/lib/Target/CMakeLists.txt b/libclamav/c++/llvm/lib/Target/CMakeLists.txt
index 10478b4..43ebdac 100644
--- a/libclamav/c++/llvm/lib/Target/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/Target/CMakeLists.txt
@@ -9,7 +9,6 @@ add_llvm_library(LLVMTarget
   TargetInstrInfo.cpp
   TargetIntrinsicInfo.cpp
   TargetLoweringObjectFile.cpp
-  TargetMachOWriterInfo.cpp
   TargetMachine.cpp
   TargetRegisterInfo.cpp
   TargetSubtarget.cpp
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/AsmPrinter/PPCAsmPrinter.cpp b/libclamav/c++/llvm/lib/Target/PowerPC/AsmPrinter/PPCAsmPrinter.cpp
index 922ea1a..afc90b1 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/AsmPrinter/PPCAsmPrinter.cpp
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/AsmPrinter/PPCAsmPrinter.cpp
@@ -47,14 +47,11 @@
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/FormattedStream.h"
-#include "llvm/ADT/Statistic.h"
 #include "llvm/ADT/StringExtras.h"
 #include "llvm/ADT/StringSet.h"
 #include "llvm/ADT/SmallString.h"
 using namespace llvm;
 
-STATISTIC(EmittedInsts, "Number of machine instrs printed");
-
 namespace {
   class PPCAsmPrinter : public AsmPrinter {
   protected:
@@ -63,8 +60,9 @@ namespace {
     uint64_t LabelID;
   public:
     explicit PPCAsmPrinter(formatted_raw_ostream &O, TargetMachine &TM,
-                           const MCAsmInfo *T, bool V)
-      : AsmPrinter(O, TM, T, V),
+                           MCContext &Ctx, MCStreamer &Streamer,
+                           const MCAsmInfo *T)
+      : AsmPrinter(O, TM, Ctx, Streamer, T),
         Subtarget(TM.getSubtarget<PPCSubtarget>()), LabelID(0) {}
 
     virtual const char *getPassName() const {
@@ -98,7 +96,7 @@ namespace {
     static const char *getRegisterName(unsigned RegNo);
 
 
-    void printMachineInstruction(const MachineInstr *MI);
+    virtual void EmitInstruction(const MachineInstr *MI);
     void printOp(const MachineOperand &MO);
 
     /// stripRegisterPrefix - This method strips the character prefix from a
@@ -200,7 +198,7 @@ namespace {
           if (GV->isDeclaration() || GV->isWeakForLinker()) {
             // Dynamically-resolved functions need a stub for the function.
             MCSymbol *Sym = GetSymbolWithGlobalValueBase(GV, "$stub");
-            const MCSymbol *&StubSym =
+            MCSymbol *&StubSym =
               MMI->getObjFileInfo<MachineModuleInfoMachO>().getFnStubEntry(Sym);
             if (StubSym == 0)
               StubSym = GetGlobalValueSymbol(GV);
@@ -213,8 +211,8 @@ namespace {
           TempNameStr += StringRef(MO.getSymbolName());
           TempNameStr += StringRef("$stub");
           
-          const MCSymbol *Sym = GetExternalSymbolSymbol(TempNameStr.str());
-          const MCSymbol *&StubSym =
+          MCSymbol *Sym = GetExternalSymbolSymbol(TempNameStr.str());
+          MCSymbol *&StubSym =
             MMI->getObjFileInfo<MachineModuleInfoMachO>().getFnStubEntry(Sym);
           if (StubSym == 0)
             StubSym = GetExternalSymbolSymbol(MO.getSymbolName());
@@ -319,24 +317,24 @@ namespace {
 
     void printPredicateOperand(const MachineInstr *MI, unsigned OpNo,
                                const char *Modifier);
-
-    virtual bool runOnMachineFunction(MachineFunction &F) = 0;
   };
 
   /// PPCLinuxAsmPrinter - PowerPC assembly printer, customized for Linux
   class PPCLinuxAsmPrinter : public PPCAsmPrinter {
   public:
     explicit PPCLinuxAsmPrinter(formatted_raw_ostream &O, TargetMachine &TM,
-                                const MCAsmInfo *T, bool V)
-      : PPCAsmPrinter(O, TM, T, V){}
+                                MCContext &Ctx, MCStreamer &Streamer,
+                                const MCAsmInfo *T)
+      : PPCAsmPrinter(O, TM, Ctx, Streamer, T) {}
 
     virtual const char *getPassName() const {
       return "Linux PPC Assembly Printer";
     }
 
-    bool runOnMachineFunction(MachineFunction &F);
     bool doFinalization(Module &M);
 
+    virtual void EmitFunctionEntryLabel();
+
     void getAnalysisUsage(AnalysisUsage &AU) const {
       AU.setPreservesAll();
       AU.addRequired<MachineModuleInfo>();
@@ -351,14 +349,14 @@ namespace {
     formatted_raw_ostream &OS;
   public:
     explicit PPCDarwinAsmPrinter(formatted_raw_ostream &O, TargetMachine &TM,
-                                 const MCAsmInfo *T, bool V)
-      : PPCAsmPrinter(O, TM, T, V), OS(O) {}
+                                 MCContext &Ctx, MCStreamer &Streamer,
+                                 const MCAsmInfo *T)
+      : PPCAsmPrinter(O, TM, Ctx, Streamer, T), OS(O) {}
 
     virtual const char *getPassName() const {
       return "Darwin PPC Assembly Printer";
     }
 
-    bool runOnMachineFunction(MachineFunction &F);
     bool doFinalization(Module &M);
     void EmitStartOfAsmFile(Module &M);
 
@@ -403,10 +401,10 @@ void PPCAsmPrinter::printOp(const MachineOperand &MO) {
       return;
     }
 
-    const MCSymbol *NLPSym = 
+    MCSymbol *NLPSym = 
       OutContext.GetOrCreateSymbol(StringRef(MAI->getGlobalPrefix())+
                                    MO.getSymbolName()+"$non_lazy_ptr");
-    const MCSymbol *&StubSym = 
+    MCSymbol *&StubSym = 
       MMI->getObjFileInfo<MachineModuleInfoMachO>().getGVStubEntry(NLPSym);
     if (StubSym == 0)
       StubSym = GetExternalSymbolSymbol(MO.getSymbolName());
@@ -424,7 +422,7 @@ void PPCAsmPrinter::printOp(const MachineOperand &MO) {
         (GV->isDeclaration() || GV->isWeakForLinker())) {
       if (!GV->hasHiddenVisibility()) {
         SymToPrint = GetSymbolWithGlobalValueBase(GV, "$non_lazy_ptr");
-        const MCSymbol *&StubSym = 
+        MCSymbol *&StubSym = 
        MMI->getObjFileInfo<MachineModuleInfoMachO>().getGVStubEntry(SymToPrint);
         if (StubSym == 0)
           StubSym = GetGlobalValueSymbol(GV);
@@ -432,7 +430,7 @@ void PPCAsmPrinter::printOp(const MachineOperand &MO) {
                  GV->hasAvailableExternallyLinkage()) {
         SymToPrint = GetSymbolWithGlobalValueBase(GV, "$non_lazy_ptr");
         
-        const MCSymbol *&StubSym = 
+        MCSymbol *&StubSym = 
           MMI->getObjFileInfo<MachineModuleInfoMachO>().
                     getHiddenGVStubEntry(SymToPrint);
         if (StubSym == 0)
@@ -535,20 +533,16 @@ void PPCAsmPrinter::printPredicateOperand(const MachineInstr *MI, unsigned OpNo,
 }
 
 
-/// printMachineInstruction -- Print out a single PowerPC MI in Darwin syntax to
+/// EmitInstruction -- Print out a single PowerPC MI in Darwin syntax to
 /// the current output stream.
 ///
-void PPCAsmPrinter::printMachineInstruction(const MachineInstr *MI) {
-  ++EmittedInsts;
-  
-  processDebugLoc(MI, true);
-
+void PPCAsmPrinter::EmitInstruction(const MachineInstr *MI) {
   // Check for slwi/srwi mnemonics.
-  bool useSubstituteMnemonic = false;
   if (MI->getOpcode() == PPC::RLWINM) {
     unsigned char SH = MI->getOperand(2).getImm();
     unsigned char MB = MI->getOperand(3).getImm();
     unsigned char ME = MI->getOperand(4).getImm();
+    bool useSubstituteMnemonic = false;
     if (SH <= 31 && MB == 0 && ME == (31-SH)) {
       O << "\tslwi "; useSubstituteMnemonic = true;
     }
@@ -561,120 +555,55 @@ void PPCAsmPrinter::printMachineInstruction(const MachineInstr *MI) {
       O << ", ";
       printOperand(MI, 1);
       O << ", " << (unsigned int)SH;
+      OutStreamer.AddBlankLine();
+      return;
     }
-  } else if (MI->getOpcode() == PPC::OR || MI->getOpcode() == PPC::OR8) {
-    if (MI->getOperand(1).getReg() == MI->getOperand(2).getReg()) {
-      useSubstituteMnemonic = true;
-      O << "\tmr ";
-      printOperand(MI, 0);
-      O << ", ";
-      printOperand(MI, 1);
-    }
-  } else if (MI->getOpcode() == PPC::RLDICR) {
+  }
+  
+  if ((MI->getOpcode() == PPC::OR || MI->getOpcode() == PPC::OR8) &&
+      MI->getOperand(1).getReg() == MI->getOperand(2).getReg()) {
+    O << "\tmr ";
+    printOperand(MI, 0);
+    O << ", ";
+    printOperand(MI, 1);
+    OutStreamer.AddBlankLine();
+    return;
+  }
+  
+  if (MI->getOpcode() == PPC::RLDICR) {
     unsigned char SH = MI->getOperand(2).getImm();
     unsigned char ME = MI->getOperand(3).getImm();
     // rldicr RA, RS, SH, 63-SH == sldi RA, RS, SH
     if (63-SH == ME) {
-      useSubstituteMnemonic = true;
       O << "\tsldi ";
       printOperand(MI, 0);
       O << ", ";
       printOperand(MI, 1);
       O << ", " << (unsigned int)SH;
+      OutStreamer.AddBlankLine();
+      return;
     }
   }
 
-  if (!useSubstituteMnemonic)
-    printInstruction(MI);
-
-  if (VerboseAsm)
-    EmitComments(*MI);
-  O << '\n';
-
-  processDebugLoc(MI, false);
+  printInstruction(MI);
+  OutStreamer.AddBlankLine();
 }
 
-/// runOnMachineFunction - This uses the printMachineInstruction()
-/// method to print assembly for each instruction.
-///
-bool PPCLinuxAsmPrinter::runOnMachineFunction(MachineFunction &MF) {
-  SetupMachineFunction(MF);
-  O << "\n\n";
-
-  // Print out constants referenced by the function
-  EmitConstantPool(MF.getConstantPool());
-
-  // Print out labels for the function.
-  const Function *F = MF.getFunction();
-  OutStreamer.SwitchSection(getObjFileLowering().SectionForGlobal(F, Mang, TM));
-
-  switch (F->getLinkage()) {
-  default: llvm_unreachable("Unknown linkage type!");
-  case Function::PrivateLinkage:
-  case Function::InternalLinkage:  // Symbols default to internal.
-    break;
-  case Function::ExternalLinkage:
-    O << "\t.global\t" << *CurrentFnSym << '\n' << "\t.type\t";
-    O << *CurrentFnSym << ", @function\n";
-    break;
-  case Function::LinkerPrivateLinkage:
-  case Function::WeakAnyLinkage:
-  case Function::WeakODRLinkage:
-  case Function::LinkOnceAnyLinkage:
-  case Function::LinkOnceODRLinkage:
-    O << "\t.global\t" << *CurrentFnSym << '\n';
-    O << "\t.weak\t" << *CurrentFnSym << '\n';
-    break;
-  }
-
-  printVisibility(CurrentFnSym, F->getVisibility());
-
-  EmitAlignment(MF.getAlignment(), F);
-
-  if (Subtarget.isPPC64()) {
-    // Emit an official procedure descriptor.
-    // FIXME 64-bit SVR4: Use MCSection here!
-    O << "\t.section\t\".opd\",\"aw\"\n";
-    O << "\t.align 3\n";
-    O << *CurrentFnSym << ":\n";
-    O << "\t.quad .L." << *CurrentFnSym << ",.TOC.@tocbase\n";
-    O << "\t.previous\n";
-    O << ".L." << *CurrentFnSym << ":\n";
-  } else {
-    O << *CurrentFnSym << ":\n";
-  }
-
-  // Emit pre-function debug information.
-  DW->BeginFunction(&MF);
-
-  // Print out code for the function.
-  for (MachineFunction::const_iterator I = MF.begin(), E = MF.end();
-       I != E; ++I) {
-    // Print a label for the basic block.
-    if (I != MF.begin()) {
-      EmitBasicBlockStart(I);
-    }
-    for (MachineBasicBlock::const_iterator II = I->begin(), E = I->end();
-         II != E; ++II) {
-      // Print the assembly for the instruction.
-      printMachineInstruction(II);
-    }
-  }
-
-  O << "\t.size\t" << *CurrentFnSym << ",.-" << *CurrentFnSym << '\n';
-
-  OutStreamer.SwitchSection(getObjFileLowering().SectionForGlobal(F, Mang, TM));
-
-  // Emit post-function debug information.
-  DW->EndFunction(&MF);
-
-  // Print out jump tables referenced by the function.
-  EmitJumpTableInfo(MF);
-
-  // We didn't modify anything.
-  return false;
+void PPCLinuxAsmPrinter::EmitFunctionEntryLabel() {
+  if (!Subtarget.isPPC64())  // linux/ppc32 - Normal entry label.
+    return AsmPrinter::EmitFunctionEntryLabel();
+    
+  // Emit an official procedure descriptor.
+  // FIXME 64-bit SVR4: Use MCSection here!
+  O << "\t.section\t\".opd\",\"aw\"\n";
+  O << "\t.align 3\n";
+  OutStreamer.EmitLabel(CurrentFnSym);
+  O << "\t.quad .L." << *CurrentFnSym << ",.TOC.@tocbase\n";
+  O << "\t.previous\n";
+  O << ".L." << *CurrentFnSym << ":\n";
 }
 
+
 bool PPCLinuxAsmPrinter::doFinalization(Module &M) {
   const TargetData *TD = TM.getTargetData();
 
@@ -695,79 +624,6 @@ bool PPCLinuxAsmPrinter::doFinalization(Module &M) {
   return AsmPrinter::doFinalization(M);
 }
 
-/// runOnMachineFunction - This uses the printMachineInstruction()
-/// method to print assembly for each instruction.
-///
-bool PPCDarwinAsmPrinter::runOnMachineFunction(MachineFunction &MF) {
-  SetupMachineFunction(MF);
-  O << "\n\n";
-
-  // Print out constants referenced by the function
-  EmitConstantPool(MF.getConstantPool());
-
-  // Print out labels for the function.
-  const Function *F = MF.getFunction();
-  OutStreamer.SwitchSection(getObjFileLowering().SectionForGlobal(F, Mang, TM));
-
-  switch (F->getLinkage()) {
-  default: llvm_unreachable("Unknown linkage type!");
-  case Function::PrivateLinkage:
-  case Function::InternalLinkage:  // Symbols default to internal.
-    break;
-  case Function::ExternalLinkage:
-    O << "\t.globl\t" << *CurrentFnSym << '\n';
-    break;
-  case Function::WeakAnyLinkage:
-  case Function::WeakODRLinkage:
-  case Function::LinkOnceAnyLinkage:
-  case Function::LinkOnceODRLinkage:
-  case Function::LinkerPrivateLinkage:
-    O << "\t.globl\t" << *CurrentFnSym << '\n';
-    O << "\t.weak_definition\t" << *CurrentFnSym << '\n';
-    break;
-  }
-
-  printVisibility(CurrentFnSym, F->getVisibility());
-
-  EmitAlignment(MF.getAlignment(), F);
-  O << *CurrentFnSym << ":\n";
-
-  // Emit pre-function debug information.
-  DW->BeginFunction(&MF);
-
-  // If the function is empty, then we need to emit *something*. Otherwise, the
-  // function's label might be associated with something that it wasn't meant to
-  // be associated with. We emit a noop in this situation.
-  MachineFunction::iterator I = MF.begin();
-
-  if (++I == MF.end() && MF.front().empty())
-    O << "\tnop\n";
-
-  // Print out code for the function.
-  for (MachineFunction::const_iterator I = MF.begin(), E = MF.end();
-       I != E; ++I) {
-    // Print a label for the basic block.
-    if (I != MF.begin()) {
-      EmitBasicBlockStart(I);
-    }
-    for (MachineBasicBlock::const_iterator II = I->begin(), IE = I->end();
-         II != IE; ++II) {
-      // Print the assembly for the instruction.
-      printMachineInstruction(II);
-    }
-  }
-
-  // Emit post-function debug information.
-  DW->EndFunction(&MF);
-
-  // Print out jump tables referenced by the function.
-  EmitJumpTableInfo(MF);
-
-  // We didn't modify anything.
-  return false;
-}
-
-
 void PPCDarwinAsmPrinter::EmitStartOfAsmFile(Module &M) {
   static const char *const CPUDirectives[] = {
     "",
@@ -924,9 +780,8 @@ bool PPCDarwinAsmPrinter::doFinalization(Module &M) {
     for (std::vector<Function *>::const_iterator I = Personalities.begin(),
          E = Personalities.end(); I != E; ++I) {
       if (*I) {
-        const MCSymbol *NLPSym = 
-          GetSymbolWithGlobalValueBase(*I, "$non_lazy_ptr");
-        const MCSymbol *&StubSym = MMIMacho.getGVStubEntry(NLPSym);
+        MCSymbol *NLPSym = GetSymbolWithGlobalValueBase(*I, "$non_lazy_ptr");
+        MCSymbol *&StubSym = MMIMacho.getGVStubEntry(NLPSym);
         StubSym = GetGlobalValueSymbol(*I);
       }
     }
@@ -977,13 +832,13 @@ bool PPCDarwinAsmPrinter::doFinalization(Module &M) {
 ///
 static AsmPrinter *createPPCAsmPrinterPass(formatted_raw_ostream &o,
                                            TargetMachine &tm,
-                                           const MCAsmInfo *tai,
-                                           bool verbose) {
+                                           MCContext &Ctx, MCStreamer &Streamer,
+                                           const MCAsmInfo *tai) {
   const PPCSubtarget *Subtarget = &tm.getSubtarget<PPCSubtarget>();
 
   if (Subtarget->isDarwin())
-    return new PPCDarwinAsmPrinter(o, tm, tai, verbose);
-  return new PPCLinuxAsmPrinter(o, tm, tai, verbose);
+    return new PPCDarwinAsmPrinter(o, tm, Ctx, Streamer, tai);
+  return new PPCLinuxAsmPrinter(o, tm, Ctx, Streamer, tai);
 }
 
 // Force static initialization.
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/CMakeLists.txt b/libclamav/c++/llvm/lib/Target/PowerPC/CMakeLists.txt
index bdd6d36..c997c5c 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/CMakeLists.txt
@@ -19,7 +19,6 @@ add_llvm_target(PowerPCCodeGen
   PPCISelDAGToDAG.cpp
   PPCISelLowering.cpp
   PPCJITInfo.cpp
-  PPCMachOWriterInfo.cpp
   PPCMCAsmInfo.cpp
   PPCPredicates.cpp
   PPCRegisterInfo.cpp
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPC.h b/libclamav/c++/llvm/lib/Target/PowerPC/PPC.h
index 7b98268..67e3a4a 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPC.h
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPC.h
@@ -23,18 +23,12 @@
 namespace llvm {
   class PPCTargetMachine;
   class FunctionPass;
-  class MachineCodeEmitter;
-  class ObjectCodeEmitter;
   class formatted_raw_ostream;
   
 FunctionPass *createPPCBranchSelectionPass();
 FunctionPass *createPPCISelDag(PPCTargetMachine &TM);
-FunctionPass *createPPCCodeEmitterPass(PPCTargetMachine &TM,
-                                       MachineCodeEmitter &MCE);
 FunctionPass *createPPCJITCodeEmitterPass(PPCTargetMachine &TM,
                                           JITCodeEmitter &MCE);
-FunctionPass *createPPCObjectCodeEmitterPass(PPCTargetMachine &TM,
-                                             ObjectCodeEmitter &OCE);
 
 extern Target ThePPC32Target;
 extern Target ThePPC64Target;
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCCodeEmitter.cpp b/libclamav/c++/llvm/lib/Target/PowerPC/PPCCodeEmitter.cpp
index da9ea36..327470d 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCCodeEmitter.cpp
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCCodeEmitter.cpp
@@ -17,26 +17,34 @@
 #include "PPC.h"
 #include "llvm/Module.h"
 #include "llvm/PassManager.h"
-#include "llvm/CodeGen/MachineCodeEmitter.h"
 #include "llvm/CodeGen/JITCodeEmitter.h"
-#include "llvm/CodeGen/ObjectCodeEmitter.h"
 #include "llvm/CodeGen/MachineFunctionPass.h"
 #include "llvm/CodeGen/MachineInstrBuilder.h"
 #include "llvm/CodeGen/MachineModuleInfo.h"
-#include "llvm/CodeGen/Passes.h"
-#include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/Target/TargetOptions.h"
 using namespace llvm;
 
 namespace {
-  class PPCCodeEmitter {
+  class PPCCodeEmitter : public MachineFunctionPass {
     TargetMachine &TM;
-    MachineCodeEmitter &MCE;
+    JITCodeEmitter &MCE;
+    
+    void getAnalysisUsage(AnalysisUsage &AU) const {
+      AU.addRequired<MachineModuleInfo>();
+      MachineFunctionPass::getAnalysisUsage(AU);
+    }
+    
+    static char ID;
+    
+    /// MovePCtoLROffset - When/if we see a MovePCtoLR instruction, we record
+    /// its address in the function into this pointer.
+    void *MovePCtoLROffset;
   public:
-    PPCCodeEmitter(TargetMachine &tm, MachineCodeEmitter &mce):
-        TM(tm), MCE(mce) {}
+    
+    PPCCodeEmitter(TargetMachine &tm, JITCodeEmitter &mce)
+      : MachineFunctionPass(&ID), TM(tm), MCE(mce) {}
 
     /// getBinaryCodeForInstr - This function, generated by the
     /// CodeEmitterGenerator using TableGen, produces the binary encoding for
@@ -49,27 +57,6 @@ namespace {
     unsigned getMachineOpValue(const MachineInstr &MI,
                                const MachineOperand &MO);
 
-    /// MovePCtoLROffset - When/if we see a MovePCtoLR instruction, we record
-    /// its address in the function into this pointer.
-
-    void *MovePCtoLROffset;
-  };
-
-  template <class CodeEmitter>
-  class Emitter : public MachineFunctionPass, public PPCCodeEmitter {
-    TargetMachine &TM;
-    CodeEmitter &MCE;
-
-    void getAnalysisUsage(AnalysisUsage &AU) const {
-      AU.addRequired<MachineModuleInfo>();
-      MachineFunctionPass::getAnalysisUsage(AU);
-    }
-
-  public:
-    static char ID;
-    Emitter(TargetMachine &tm, CodeEmitter &mce)
-      : MachineFunctionPass(&ID), PPCCodeEmitter(tm, mce), TM(tm), MCE(mce) {}
-
     const char *getPassName() const { return "PowerPC Machine Code Emitter"; }
 
     /// runOnMachineFunction - emits the given MachineFunction to memory
@@ -84,31 +71,18 @@ namespace {
     ///
     unsigned getValueBit(int64_t Val, unsigned bit) { return (Val >> bit) & 1; }
   };
-
-  template <class CodeEmitter>
-    char Emitter<CodeEmitter>::ID = 0;
 }
 
+char PPCCodeEmitter::ID = 0;
+
 /// createPPCCodeEmitterPass - Return a pass that emits the collected PPC code
 /// to the specified MCE object.
-
-FunctionPass *llvm::createPPCCodeEmitterPass(PPCTargetMachine &TM,
-                                             MachineCodeEmitter &MCE) {
-  return new Emitter<MachineCodeEmitter>(TM, MCE);
-}
-
 FunctionPass *llvm::createPPCJITCodeEmitterPass(PPCTargetMachine &TM,
                                                 JITCodeEmitter &JCE) {
-  return new Emitter<JITCodeEmitter>(TM, JCE);
-}
-
-FunctionPass *llvm::createPPCObjectCodeEmitterPass(PPCTargetMachine &TM,
-                                                   ObjectCodeEmitter &OCE) {
-  return new Emitter<ObjectCodeEmitter>(TM, OCE);
+  return new PPCCodeEmitter(TM, JCE);
 }
 
-template <class CodeEmitter>
-bool Emitter<CodeEmitter>::runOnMachineFunction(MachineFunction &MF) {
+bool PPCCodeEmitter::runOnMachineFunction(MachineFunction &MF) {
   assert((MF.getTarget().getRelocationModel() != Reloc::Default ||
           MF.getTarget().getRelocationModel() != Reloc::Static) &&
          "JIT relocation model must be set to static or default!");
@@ -124,8 +98,7 @@ bool Emitter<CodeEmitter>::runOnMachineFunction(MachineFunction &MF) {
   return false;
 }
 
-template <class CodeEmitter>
-void Emitter<CodeEmitter>::emitBasicBlock(MachineBasicBlock &MBB) {
+void PPCCodeEmitter::emitBasicBlock(MachineBasicBlock &MBB) {
   MCE.StartMachineBasicBlock(&MBB);
 
   for (MachineBasicBlock::iterator I = MBB.begin(), E = MBB.end(); I != E; ++I){
@@ -135,12 +108,12 @@ void Emitter<CodeEmitter>::emitBasicBlock(MachineBasicBlock &MBB) {
     default:
       MCE.emitWordBE(getBinaryCodeForInstr(MI));
       break;
-    case TargetInstrInfo::DBG_LABEL:
-    case TargetInstrInfo::EH_LABEL:
+    case TargetOpcode::DBG_LABEL:
+    case TargetOpcode::EH_LABEL:
       MCE.emitLabel(MI.getOperand(0).getImm());
       break;
-    case TargetInstrInfo::IMPLICIT_DEF:
-    case TargetInstrInfo::KILL:
+    case TargetOpcode::IMPLICIT_DEF:
+    case TargetOpcode::KILL:
       break; // pseudo opcode, no side effects
     case PPC::MovePCtoLR:
     case PPC::MovePCtoLR8:
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCHazardRecognizers.cpp b/libclamav/c++/llvm/lib/Target/PowerPC/PPCHazardRecognizers.cpp
index 6af7e0f..3a15f7e 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCHazardRecognizers.cpp
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCHazardRecognizers.cpp
@@ -118,7 +118,7 @@ isLoadOfStoredAddress(unsigned LoadSize, SDValue Ptr1, SDValue Ptr2) const {
 }
 
 /// getHazardType - We return hazard for any non-branch instruction that would
-/// terminate terminate the dispatch group.  We turn NoopHazard for any
+/// terminate the dispatch group.  We turn NoopHazard for any
 /// instructions that wouldn't terminate the dispatch group that would cause a
 /// pipeline flush.
 ScheduleHazardRecognizer::HazardType PPCHazardRecognizer970::
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelDAGToDAG.cpp b/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelDAGToDAG.cpp
index 32c1879..004997f 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelDAGToDAG.cpp
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelDAGToDAG.cpp
@@ -199,7 +199,7 @@ void PPCDAGToDAGISel::InsertVRSaveCode(MachineFunction &Fn) {
   // Check to see if this function uses vector registers, which means we have to
   // save and restore the VRSAVE register and update it with the regs we use.  
   //
-  // In this case, there will be virtual registers of vector type type created
+  // In this case, there will be virtual registers of vector type created
   // by the scheduler.  Detect them now.
   bool HasVectorVReg = false;
   for (unsigned i = TargetRegisterInfo::FirstVirtualRegister, 
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.cpp b/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.cpp
index 8248c94..a11d624 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.cpp
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.cpp
@@ -1572,7 +1572,7 @@ PPCTargetLowering::LowerFormalArguments_SVR4(
 
   EVT PtrVT = DAG.getTargetLoweringInfo().getPointerTy();
   // Potential tail calls could cause overwriting of argument stack slots.
-  bool isImmutable = !(PerformTailCallOpt && (CallConv==CallingConv::Fast));
+  bool isImmutable = !(GuaranteedTailCallOpt && (CallConv==CallingConv::Fast));
   unsigned PtrByteSize = 4;
 
   // Assign locations to all of the incoming arguments.
@@ -1773,7 +1773,7 @@ PPCTargetLowering::LowerFormalArguments_Darwin(
   EVT PtrVT = DAG.getTargetLoweringInfo().getPointerTy();
   bool isPPC64 = PtrVT == MVT::i64;
   // Potential tail calls could cause overwriting of argument stack slots.
-  bool isImmutable = !(PerformTailCallOpt && (CallConv==CallingConv::Fast));
+  bool isImmutable = !(GuaranteedTailCallOpt && (CallConv==CallingConv::Fast));
   unsigned PtrByteSize = isPPC64 ? 8 : 4;
 
   unsigned ArgOffset = PPCFrameInfo::getLinkageSize(isPPC64, true);
@@ -2164,7 +2164,7 @@ CalculateParameterAndLinkageAreaSize(SelectionDAG &DAG,
                       PPCFrameInfo::getMinCallFrameSize(isPPC64, true));
 
   // Tail call needs the stack to be aligned.
-  if (CC==CallingConv::Fast && PerformTailCallOpt) {
+  if (CC==CallingConv::Fast && GuaranteedTailCallOpt) {
     unsigned TargetAlign = DAG.getMachineFunction().getTarget().getFrameInfo()->
       getStackAlignment();
     unsigned AlignMask = TargetAlign-1;
@@ -2200,6 +2200,9 @@ PPCTargetLowering::IsEligibleForTailCallOptimization(SDValue Callee,
                                                      bool isVarArg,
                                       const SmallVectorImpl<ISD::InputArg> &Ins,
                                                      SelectionDAG& DAG) const {
+  if (!GuaranteedTailCallOpt)
+    return false;
+
   // Variable argument functions are not supported.
   if (isVarArg)
     return false;
@@ -2601,7 +2604,7 @@ PPCTargetLowering::FinishCall(CallingConv::ID CallConv, DebugLoc dl,
   // the stack. Account for this here so these bytes can be pushed back on in
   // PPCRegisterInfo::eliminateCallFramePseudoInstr.
   int BytesCalleePops =
-    (CallConv==CallingConv::Fast && PerformTailCallOpt) ? NumBytes : 0;
+    (CallConv==CallingConv::Fast && GuaranteedTailCallOpt) ? NumBytes : 0;
 
   if (InFlag.getNode())
     Ops.push_back(InFlag);
@@ -2673,11 +2676,15 @@ PPCTargetLowering::FinishCall(CallingConv::ID CallConv, DebugLoc dl,
 SDValue
 PPCTargetLowering::LowerCall(SDValue Chain, SDValue Callee,
                              CallingConv::ID CallConv, bool isVarArg,
-                             bool isTailCall,
+                             bool &isTailCall,
                              const SmallVectorImpl<ISD::OutputArg> &Outs,
                              const SmallVectorImpl<ISD::InputArg> &Ins,
                              DebugLoc dl, SelectionDAG &DAG,
                              SmallVectorImpl<SDValue> &InVals) {
+  if (isTailCall)
+    isTailCall = IsEligibleForTailCallOptimization(Callee, CallConv, isVarArg,
+                                                   Ins, DAG);
+
   if (PPCSubTarget.isSVR4ABI() && !PPCSubTarget.isPPC64()) {
     return LowerCall_SVR4(Chain, Callee, CallConv, isVarArg,
                           isTailCall, Outs, Ins,
@@ -2700,10 +2707,6 @@ PPCTargetLowering::LowerCall_SVR4(SDValue Chain, SDValue Callee,
   // See PPCTargetLowering::LowerFormalArguments_SVR4() for a description
   // of the 32-bit SVR4 ABI stack frame layout.
 
-  assert((!isTailCall ||
-          (CallConv == CallingConv::Fast && PerformTailCallOpt)) &&
-         "IsEligibleForTailCallOptimization missed a case!");
-
   assert((CallConv == CallingConv::C ||
           CallConv == CallingConv::Fast) && "Unknown calling convention!");
 
@@ -2717,7 +2720,7 @@ PPCTargetLowering::LowerCall_SVR4(SDValue Chain, SDValue Callee,
   // and restoring the callers stack pointer in this functions epilog. This is
   // done because by tail calling the called function might overwrite the value
   // in this function's (MF) stack pointer stack slot 0(SP).
-  if (PerformTailCallOpt && CallConv==CallingConv::Fast)
+  if (GuaranteedTailCallOpt && CallConv==CallingConv::Fast)
     MF.getInfo<PPCFunctionInfo>()->setHasFastCall();
   
   // Count how many bytes are to be pushed on the stack, including the linkage
@@ -2920,7 +2923,7 @@ PPCTargetLowering::LowerCall_Darwin(SDValue Chain, SDValue Callee,
   // and restoring the callers stack pointer in this functions epilog. This is
   // done because by tail calling the called function might overwrite the value
   // in this function's (MF) stack pointer stack slot 0(SP).
-  if (PerformTailCallOpt && CallConv==CallingConv::Fast)
+  if (GuaranteedTailCallOpt && CallConv==CallingConv::Fast)
     MF.getInfo<PPCFunctionInfo>()->setHasFastCall();
 
   unsigned nAltivecParamsAtEnd = 0;
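[Editorial note, not part of the patch: the PPCISelLowering.cpp hunks above change the tail-call contract in two ways: eligibility is now gated on GuaranteedTailCallOpt, and LowerCall takes isTailCall by reference so it can demote an ineligible request itself instead of asserting later. A minimal sketch of that control flow, with simplified stand-in types (Lowering, CallConv, and the boolean member are assumptions for illustration, not LLVM API):]

```cpp
#include <cassert>

// Sketch of the gating introduced by this patch: eligibility is checked
// inside LowerCall and the verdict is written back through the
// by-reference isTailCall flag. All names here are illustrative.
enum class CallConv { C, Fast };

struct Lowering {
  bool GuaranteedTailCallOpt; // stand-in for the codegen command-line flag

  bool IsEligibleForTailCallOptimization(CallConv CC, bool isVarArg) const {
    if (!GuaranteedTailCallOpt)
      return false;            // early-out added by this patch
    if (isVarArg)
      return false;            // variable-argument functions unsupported
    return CC == CallConv::Fast; // fastcc calls only
  }

  void LowerCall(CallConv CC, bool isVarArg, bool &isTailCall) {
    // Demote a requested tail call if it is not eligible, instead of
    // asserting in LowerCall_SVR4 as the removed assert used to.
    if (isTailCall)
      isTailCall = IsEligibleForTailCallOptimization(CC, isVarArg);
  }
};
```

[This mirrors why the "IsEligibleForTailCallOptimization missed a case!" assert could be deleted: the caller now reconciles the flag up front.]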
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.h b/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.h
index cf81395..9c390ac 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.h
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCISelLowering.h
@@ -345,13 +345,6 @@ namespace llvm {
     /// the offset of the target addressing mode.
     virtual bool isLegalAddressImmediate(GlobalValue *GV) const;
 
-    virtual bool
-    IsEligibleForTailCallOptimization(SDValue Callee,
-                                      CallingConv::ID CalleeCC,
-                                      bool isVarArg,
-                                      const SmallVectorImpl<ISD::InputArg> &Ins,
-                                      SelectionDAG& DAG) const;
-
     virtual bool isOffsetFoldingLegal(const GlobalAddressSDNode *GA) const;
     
     virtual EVT getOptimalMemOpType(uint64_t Size, unsigned Align,
@@ -365,6 +358,13 @@ namespace llvm {
     SDValue getFramePointerFrameIndex(SelectionDAG & DAG) const;
     SDValue getReturnAddrFrameIndex(SelectionDAG & DAG) const;
 
+    bool
+    IsEligibleForTailCallOptimization(SDValue Callee,
+                                      CallingConv::ID CalleeCC,
+                                      bool isVarArg,
+                                      const SmallVectorImpl<ISD::InputArg> &Ins,
+                                      SelectionDAG& DAG) const;
+
     SDValue EmitTailCallLoadFPAndRetAddr(SelectionDAG & DAG,
                                          int SPDiff,
                                          SDValue Chain,
@@ -431,7 +431,7 @@ namespace llvm {
 
     virtual SDValue
       LowerCall(SDValue Chain, SDValue Callee,
-                CallingConv::ID CallConv, bool isVarArg, bool isTailCall,
+                CallingConv::ID CallConv, bool isVarArg, bool &isTailCall,
                 const SmallVectorImpl<ISD::OutputArg> &Outs,
                 const SmallVectorImpl<ISD::InputArg> &Ins,
                 DebugLoc dl, SelectionDAG &DAG,
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.cpp b/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.cpp
index af7d812..3db623a 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCInstrInfo.cpp
@@ -421,22 +421,30 @@ PPCInstrInfo::StoreRegToStackSlot(MachineFunction &MF,
                                          FrameIdx));
       return true;
     } else {
-      // FIXME: We use R0 here, because it isn't available for RA.  We need to
-      // store the CR in the low 4-bits of the saved value.  First, issue a MFCR
-      // to save all of the CRBits.
-      NewMIs.push_back(BuildMI(MF, DL, get(PPC::MFCR), PPC::R0));
+      // FIXME: We need a scratch reg here.  The trouble with using R0 is that
+      // it's possible for the stack frame to be so big the save location is
+      // out of range of immediate offsets, necessitating another register.
+      // We hack this on Darwin by reserving R2.  It's probably broken on Linux
+      // at the moment.
+
+      // We need to store the CR in the low 4-bits of the saved value.  First,
+      // issue a MFCR to save all of the CRBits.
+      unsigned ScratchReg = TM.getSubtargetImpl()->isDarwinABI() ? 
+                                                           PPC::R2 : PPC::R0;
+      NewMIs.push_back(BuildMI(MF, DL, get(PPC::MFCR), ScratchReg));
     
       // If the saved register wasn't CR0, shift the bits left so that they are
       // in CR0's slot.
       if (SrcReg != PPC::CR0) {
         unsigned ShiftBits = PPCRegisterInfo::getRegisterNumbering(SrcReg)*4;
-        // rlwinm r0, r0, ShiftBits, 0, 31.
-        NewMIs.push_back(BuildMI(MF, DL, get(PPC::RLWINM), PPC::R0)
-                       .addReg(PPC::R0).addImm(ShiftBits).addImm(0).addImm(31));
+        // rlwinm scratch, scratch, ShiftBits, 0, 31.
+        NewMIs.push_back(BuildMI(MF, DL, get(PPC::RLWINM), ScratchReg)
+                       .addReg(ScratchReg).addImm(ShiftBits)
+                       .addImm(0).addImm(31));
       }
     
       NewMIs.push_back(addFrameReference(BuildMI(MF, DL, get(PPC::STW))
-                                         .addReg(PPC::R0,
+                                         .addReg(ScratchReg,
                                                  getKillRegState(isKill)),
                                          FrameIdx));
     }
@@ -540,20 +548,28 @@ PPCInstrInfo::LoadRegFromStackSlot(MachineFunction &MF, DebugLoc DL,
     NewMIs.push_back(addFrameReference(BuildMI(MF, DL, get(PPC::LFS), DestReg),
                                        FrameIdx));
   } else if (RC == PPC::CRRCRegisterClass) {
-    // FIXME: We use R0 here, because it isn't available for RA.
-    NewMIs.push_back(addFrameReference(BuildMI(MF, DL, get(PPC::LWZ), PPC::R0),
-                                       FrameIdx));
+    // FIXME: We need a scratch reg here.  The trouble with using R0 is that
+    // it's possible for the stack frame to be so big the save location is
+    // out of range of immediate offsets, necessitating another register.
+    // We hack this on Darwin by reserving R2.  It's probably broken on Linux
+    // at the moment.
+    unsigned ScratchReg = TM.getSubtargetImpl()->isDarwinABI() ?
+                                                          PPC::R2 : PPC::R0;
+    NewMIs.push_back(addFrameReference(BuildMI(MF, DL, get(PPC::LWZ), 
+                                       ScratchReg), FrameIdx));
     
     // If the reloaded register isn't CR0, shift the bits right so that they are
     // in the right CR's slot.
     if (DestReg != PPC::CR0) {
       unsigned ShiftBits = PPCRegisterInfo::getRegisterNumbering(DestReg)*4;
       // rlwinm r11, r11, 32-ShiftBits, 0, 31.
-      NewMIs.push_back(BuildMI(MF, DL, get(PPC::RLWINM), PPC::R0)
-                    .addReg(PPC::R0).addImm(32-ShiftBits).addImm(0).addImm(31));
+      NewMIs.push_back(BuildMI(MF, DL, get(PPC::RLWINM), ScratchReg)
+                    .addReg(ScratchReg).addImm(32-ShiftBits).addImm(0)
+                    .addImm(31));
     }
     
-    NewMIs.push_back(BuildMI(MF, DL, get(PPC::MTCRF), DestReg).addReg(PPC::R0));
+    NewMIs.push_back(BuildMI(MF, DL, get(PPC::MTCRF), DestReg)
+                     .addReg(ScratchReg));
   } else if (RC == PPC::CRBITRCRegisterClass) {
    
     unsigned Reg = 0;
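[Editorial note, not part of the patch: the `rlwinm scratch, scratch, ShiftBits, 0, 31` instructions in the hunks above rely on the fact that rlwinm with a full 0..31 mask is a plain 32-bit rotate, and that MFCR packs CR0..CR7 as eight 4-bit fields with CR0 in the most-significant nibble. A small model of that bit manipulation (function names are hypothetical, only the arithmetic is taken from the patch):]

```cpp
#include <cassert>
#include <cstdint>

// rlwinm rD, rS, SH, 0, 31 with the full mask is a rotate-left by SH.
static uint32_t rotl32(uint32_t x, unsigned sh) {
  sh &= 31;
  return sh ? (x << sh) | (x >> (32 - sh)) : x;
}

// MFCR places CRn's 4-bit field at bits 31-4n .. 28-4n, so rotating
// left by ShiftBits = 4*n moves CRn's field into CR0's slot -- exactly
// the ShiftBits computed in StoreRegToStackSlot above.
static uint32_t moveCRFieldToCR0(uint32_t mfcrValue, unsigned crNum) {
  unsigned ShiftBits = crNum * 4;
  return rotl32(mfcrValue, ShiftBits);
}
```

[The reload path inverts this with a rotate by 32-ShiftBits before MTCRF, which is why the second hunk uses `32-ShiftBits`.]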
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCMCAsmInfo.cpp b/libclamav/c++/llvm/lib/Target/PowerPC/PPCMCAsmInfo.cpp
index c61627e..b37aee8 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCMCAsmInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCMCAsmInfo.cpp
@@ -26,6 +26,9 @@ PPCMCAsmInfoDarwin::PPCMCAsmInfoDarwin(bool is64Bit) {
 }
 
 PPCLinuxMCAsmInfo::PPCLinuxMCAsmInfo(bool is64Bit) {
+  // ".comm align is in bytes but .align is pow-2."
+  AlignmentIsInBytes = false;
+
   CommentString = "#";
   GlobalPrefix = "";
   PrivateGlobalPrefix = ".L";
@@ -49,7 +52,6 @@ PPCLinuxMCAsmInfo::PPCLinuxMCAsmInfo(bool is64Bit) {
   AbsoluteEHSectionOffsets = false;
     
   ZeroDirective = "\t.space\t";
-  SetDirective = "\t.set";
   Data64bitsDirective = is64Bit ? "\t.quad\t" : 0;
   HasLCOMMDirective = true;
   AssemblerDialect = 0;           // Old-Style mnemonics.
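[Editorial note, not part of the patch: the `AlignmentIsInBytes = false` change in the PPCMCAsmInfo.cpp hunk above matters because with the GNU-style directive `.align N` means 2^N bytes on this target, while `.comm` takes a byte count. A hedged sketch of the conversion an asm printer would perform (the helper name is an assumption for illustration):]

```cpp
#include <cassert>

// When AlignmentIsInBytes is false, ".align N" requests 2^N-byte
// alignment, so a byte alignment must be emitted as its log2.
// Hypothetical helper; byteAlign is assumed to be a power of two.
static unsigned alignDirectiveArg(unsigned byteAlign,
                                  bool alignmentIsInBytes) {
  if (alignmentIsInBytes)
    return byteAlign;              // e.g. ".align 16" means 16 bytes
  unsigned log2 = 0;
  while ((1u << log2) < byteAlign) // pow-2 style: ".align 4" means 16 bytes
    ++log2;
  return log2;
}
```

[Without this flag set, a 16-byte request would have been emitted as `.align 16`, which the Linux assembler reads as 2^16 bytes.]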
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCMachOWriterInfo.cpp b/libclamav/c++/llvm/lib/Target/PowerPC/PPCMachOWriterInfo.cpp
deleted file mode 100644
index 4c14454..0000000
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCMachOWriterInfo.cpp
+++ /dev/null
@@ -1,152 +0,0 @@
-//===-- PPCMachOWriterInfo.cpp - Mach-O Writer Info for the PowerPC -------===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This file implements Mach-O writer information for the PowerPC backend.
-//
-//===----------------------------------------------------------------------===//
-
-#include "PPCMachOWriterInfo.h"
-#include "PPCRelocations.h"
-#include "PPCTargetMachine.h"
-#include "llvm/CodeGen/MachORelocation.h"
-#include "llvm/Support/OutputBuffer.h"
-#include "llvm/Support/ErrorHandling.h"
-#include <cstdio>
-using namespace llvm;
-
-PPCMachOWriterInfo::PPCMachOWriterInfo(const PPCTargetMachine &TM)
-  : TargetMachOWriterInfo(TM.getTargetData()->getPointerSizeInBits() == 64 ?
-                          HDR_CPU_TYPE_POWERPC64 :
-                          HDR_CPU_TYPE_POWERPC,
-                          HDR_CPU_SUBTYPE_POWERPC_ALL) {}
-PPCMachOWriterInfo::~PPCMachOWriterInfo() {}
-
-/// GetTargetRelocation - For the MachineRelocation MR, convert it to one or
-/// more PowerPC MachORelocation(s), add the new relocations to the
-/// MachOSection, and rewrite the instruction at the section offset if required
-/// by that relocation type.
-unsigned PPCMachOWriterInfo::GetTargetRelocation(MachineRelocation &MR,
-                                                 unsigned FromIdx,
-                                                 unsigned ToAddr,
-                                                 unsigned ToIdx,
-                                                 OutputBuffer &RelocOut,
-                                                 OutputBuffer &SecOut,
-                                                 bool Scattered,
-                                                 bool isExtern) const {
-  unsigned NumRelocs = 0;
-  uint64_t Addr = 0;
-
-  // Get the address of whatever it is we're relocating, if possible.
-  if (!isExtern)
-    Addr = (uintptr_t)MR.getResultPointer() + ToAddr;
-
-  switch ((PPC::RelocationType)MR.getRelocationType()) {
-  default: llvm_unreachable("Unknown PPC relocation type!");
-  case PPC::reloc_absolute_low_ix:
-    llvm_unreachable("Unhandled PPC relocation type!");
-    break;
-  case PPC::reloc_vanilla:
-    {
-      // FIXME: need to handle 64 bit vanilla relocs
-      MachORelocation VANILLA(MR.getMachineCodeOffset(), ToIdx,
-                              false, 2, isExtern,
-                              PPC_RELOC_VANILLA,
-                              Scattered, (intptr_t)MR.getResultPointer());
-      ++NumRelocs;
-
-      if (Scattered) {
-        RelocOut.outword(VANILLA.getPackedFields());
-        RelocOut.outword(VANILLA.getAddress());
-      } else {
-        RelocOut.outword(VANILLA.getAddress());
-        RelocOut.outword(VANILLA.getPackedFields());
-      }
-      
-      intptr_t SymbolOffset;
-
-      if (Scattered)
-        SymbolOffset = Addr + MR.getConstantVal();
-      else
-        SymbolOffset = Addr;
-
-      printf("vanilla fixup: sec_%x[%x] = %x\n", FromIdx,
-             unsigned(MR.getMachineCodeOffset()),
-             unsigned(SymbolOffset));
-      SecOut.fixword(SymbolOffset, MR.getMachineCodeOffset());
-    }
-    break;
-  case PPC::reloc_pcrel_bx:
-    {
-      // FIXME: Presumably someday we will need to branch to other, non-extern
-      // functions too.  Need to figure out some way to distinguish between
-      // target is BB and target is function.
-      if (isExtern) {
-        MachORelocation BR24(MR.getMachineCodeOffset(), ToIdx, true, 2, 
-                             isExtern, PPC_RELOC_BR24, Scattered, 
-                             (intptr_t)MR.getMachineCodeOffset());
-        RelocOut.outword(BR24.getAddress());
-        RelocOut.outword(BR24.getPackedFields());
-        ++NumRelocs;
-      }
-
-      Addr -= MR.getMachineCodeOffset();
-      Addr >>= 2;
-      Addr &= 0xFFFFFF;
-      Addr <<= 2;
-      Addr |= (SecOut[MR.getMachineCodeOffset()] << 24);
-      Addr |= (SecOut[MR.getMachineCodeOffset()+3] & 0x3);
-      SecOut.fixword(Addr, MR.getMachineCodeOffset());
-      break;
-    }
-  case PPC::reloc_pcrel_bcx:
-    {
-      Addr -= MR.getMachineCodeOffset();
-      Addr &= 0xFFFC;
-
-      SecOut.fixhalf(Addr, MR.getMachineCodeOffset() + 2);
-      break;
-    }
-  case PPC::reloc_absolute_high:
-    {
-      MachORelocation HA16(MR.getMachineCodeOffset(), ToIdx, false, 2,
-                           isExtern, PPC_RELOC_HA16);
-      MachORelocation PAIR(Addr & 0xFFFF, 0xFFFFFF, false, 2, isExtern,
-                           PPC_RELOC_PAIR);
-      NumRelocs = 2;
-
-      RelocOut.outword(HA16.getRawAddress());
-      RelocOut.outword(HA16.getPackedFields());
-      RelocOut.outword(PAIR.getRawAddress());
-      RelocOut.outword(PAIR.getPackedFields());
-
-      Addr += 0x8000;
-
-      SecOut.fixhalf(Addr >> 16, MR.getMachineCodeOffset() + 2);
-      break;
-    }
-  case PPC::reloc_absolute_low:
-    {
-      MachORelocation LO16(MR.getMachineCodeOffset(), ToIdx, false, 2,
-                           isExtern, PPC_RELOC_LO16);
-      MachORelocation PAIR(Addr >> 16, 0xFFFFFF, false, 2, isExtern,
-                           PPC_RELOC_PAIR);
-      NumRelocs = 2;
-
-      RelocOut.outword(LO16.getRawAddress());
-      RelocOut.outword(LO16.getPackedFields());
-      RelocOut.outword(PAIR.getRawAddress());
-      RelocOut.outword(PAIR.getPackedFields());
-
-      SecOut.fixhalf(Addr, MR.getMachineCodeOffset() + 2);
-      break;
-    }
-  }
-
-  return NumRelocs;
-}
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCMachOWriterInfo.h b/libclamav/c++/llvm/lib/Target/PowerPC/PPCMachOWriterInfo.h
deleted file mode 100644
index d46334d..0000000
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCMachOWriterInfo.h
+++ /dev/null
@@ -1,55 +0,0 @@
-//===-- PPCMachOWriterInfo.h - Mach-O Writer Info for PowerPC ---*- C++ -*-===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This file implements Mach-O writer information for the PowerPC backend.
-//
-//===----------------------------------------------------------------------===//
-
-#ifndef PPC_MACHO_WRITER_INFO_H
-#define PPC_MACHO_WRITER_INFO_H
-
-#include "llvm/Target/TargetMachOWriterInfo.h"
-
-namespace llvm {
-
-  // Forward declarations
-  class MachineRelocation;
-  class OutputBuffer;
-  class PPCTargetMachine;
-
-  class PPCMachOWriterInfo : public TargetMachOWriterInfo {
-  public:
-    PPCMachOWriterInfo(const PPCTargetMachine &TM);
-    virtual ~PPCMachOWriterInfo();
-
-    virtual unsigned GetTargetRelocation(MachineRelocation &MR,
-                                         unsigned FromIdx,
-                                         unsigned ToAddr,
-                                         unsigned ToIdx,
-                                         OutputBuffer &RelocOut,
-                                         OutputBuffer &SecOut,
-                                         bool Scattered, bool Extern) const;
-
-    // Constants for the relocation r_type field.
-    // See <mach-o/ppc/reloc.h>
-    enum {
-      PPC_RELOC_VANILLA, // generic relocation
-      PPC_RELOC_PAIR,    // the second relocation entry of a pair
-      PPC_RELOC_BR14,    // 14 bit branch displacement to word address
-      PPC_RELOC_BR24,    // 24 bit branch displacement to word address
-      PPC_RELOC_HI16,    // a PAIR follows with the low 16 bits
-      PPC_RELOC_LO16,    // a PAIR follows with the high 16 bits
-      PPC_RELOC_HA16,    // a PAIR follows, which is sign extended to 32b
-      PPC_RELOC_LO14     // LO16 with low 2 bits implicitly zero
-    };
-  };
-
-} // end llvm namespace
-
-#endif // PPC_MACHO_WRITER_INFO_H
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCRegisterInfo.cpp b/libclamav/c++/llvm/lib/Target/PowerPC/PPCRegisterInfo.cpp
index 0c3c8eb..0b509ac 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCRegisterInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCRegisterInfo.cpp
@@ -406,7 +406,7 @@ PPCRegisterInfo::getCalleeSavedRegClasses(const MachineFunction *MF) const {
 static bool needsFP(const MachineFunction &MF) {
   const MachineFrameInfo *MFI = MF.getFrameInfo();
   return NoFramePointerElim || MFI->hasVarSizedObjects() ||
-    (PerformTailCallOpt && MF.getInfo<PPCFunctionInfo>()->hasFastCall());
+    (GuaranteedTailCallOpt && MF.getInfo<PPCFunctionInfo>()->hasFastCall());
 }
 
 static bool spillsCR(const MachineFunction &MF) {
@@ -427,6 +427,12 @@ BitVector PPCRegisterInfo::getReservedRegs(const MachineFunction &MF) const {
     Reserved.set(PPC::R2);  // System-reserved register
     Reserved.set(PPC::R13); // Small Data Area pointer register
   }
+  // Reserve R2 on Darwin to hack around the problem of save/restore of CR
+  // when the stack frame is too big to address directly; we need two regs.
+  // This is a hack.
+  if (Subtarget.isDarwinABI()) {
+    Reserved.set(PPC::R2);
+  }
   
   // On PPC64, r13 is the thread pointer. Never allocate this register.
   // Note that this is over conservative, as it also prevents allocation of R31
@@ -447,6 +453,12 @@ BitVector PPCRegisterInfo::getReservedRegs(const MachineFunction &MF) const {
     if (Subtarget.isSVR4ABI()) {
       Reserved.set(PPC::X2);
     }
+    // Reserve R2 on Darwin to hack around the problem of save/restore of CR
+    // when the stack frame is too big to address directly; we need two regs.
+    // This is a hack.
+    if (Subtarget.isDarwinABI()) {
+      Reserved.set(PPC::X2);
+    }
   }
 
   if (needsFP(MF))
@@ -486,7 +498,7 @@ static bool MustSaveLR(const MachineFunction &MF, unsigned LR) {
 void PPCRegisterInfo::
 eliminateCallFramePseudoInstr(MachineFunction &MF, MachineBasicBlock &MBB,
                               MachineBasicBlock::iterator I) const {
-  if (PerformTailCallOpt && I->getOpcode() == PPC::ADJCALLSTACKUP) {
+  if (GuaranteedTailCallOpt && I->getOpcode() == PPC::ADJCALLSTACKUP) {
     // Add (actually subtract) back the amount the callee popped on return.
     if (int CalleeAmt =  I->getOperand(1).getImm()) {
       bool is64Bit = Subtarget.isPPC64();
@@ -724,7 +736,7 @@ PPCRegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
   }
   // Take into account whether it's an add or mem instruction
   unsigned OffsetOperandNo = (FIOperandNo == 2) ? 1 : 2;
-  if (MI.getOpcode() == TargetInstrInfo::INLINEASM)
+  if (MI.isInlineAsm())
     OffsetOperandNo = FIOperandNo-1;
 
   // Get the frame index.
@@ -817,7 +829,7 @@ PPCRegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
   //   addi 0:rA 1:rB, 2, imm ==> add 0:rA, 1:rB, 2:r0
   unsigned OperandBase;
 
-  if (OpC != TargetInstrInfo::INLINEASM) {
+  if (OpC != TargetOpcode::INLINEASM) {
     assert(ImmToIdxMap.count(OpC) &&
            "No indexed form of load or store available!");
     unsigned NewOpcode = ImmToIdxMap.find(OpC)->second;
@@ -1050,7 +1062,7 @@ PPCRegisterInfo::processFunctionBeforeCalleeSavedScan(MachineFunction &MF,
 
   // Reserve stack space to move the linkage area to in case of a tail call.
   int TCSPDelta = 0;
-  if (PerformTailCallOpt && (TCSPDelta = FI->getTailCallSPDelta()) < 0) {
+  if (GuaranteedTailCallOpt && (TCSPDelta = FI->getTailCallSPDelta()) < 0) {
     MF.getFrameInfo()->CreateFixedObject(-1 * TCSPDelta, TCSPDelta,
                                          true, false);
   }
@@ -1160,7 +1172,7 @@ PPCRegisterInfo::processFunctionBeforeFrameFinalized(MachineFunction &MF)
 
   // Take into account stack space reserved for tail calls.
   int TCSPDelta = 0;
-  if (PerformTailCallOpt && (TCSPDelta = PFI->getTailCallSPDelta()) < 0) {
+  if (GuaranteedTailCallOpt && (TCSPDelta = PFI->getTailCallSPDelta()) < 0) {
     LowerBound = TCSPDelta;
   }
 
@@ -1575,7 +1587,7 @@ void PPCRegisterInfo::emitEpilogue(MachineFunction &MF,
     // The loaded (or persistent) stack pointer value is offset by the 'stwu'
     // on entry to the function.  Add this offset back now.
     if (!isPPC64) {
-      // If this function contained a fastcc call and PerformTailCallOpt is
+      // If this function contained a fastcc call and GuaranteedTailCallOpt is
       // enabled (=> hasFastCall()==true) the fastcc call might contain a tail
       // call which invalidates the stack pointer value in SP(0). So we use the
       // value of R31 in this case.
@@ -1654,7 +1666,7 @@ void PPCRegisterInfo::emitEpilogue(MachineFunction &MF,
 
   // Callee pop calling convention. Pop parameter/linkage area. Used for tail
   // call optimization
-  if (PerformTailCallOpt && RetOpcode == PPC::BLR &&
+  if (GuaranteedTailCallOpt && RetOpcode == PPC::BLR &&
       MF.getFunction()->getCallingConv() == CallingConv::Fast) {
      PPCFunctionInfo *FI = MF.getInfo<PPCFunctionInfo>();
      unsigned CallerAllocatedAmt = FI->getMinReservedArea();
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCSubtarget.cpp b/libclamav/c++/llvm/lib/Target/PowerPC/PPCSubtarget.cpp
index f75e781..40914ba 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCSubtarget.cpp
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCSubtarget.cpp
@@ -130,7 +130,7 @@ bool PPCSubtarget::hasLazyResolverStub(const GlobalValue *GV,
     return false;
   // If symbol visibility is hidden, the extra load is not needed if
   // the symbol is definitely defined in the current translation unit.
-  bool isDecl = GV->isDeclaration() && !GV->hasNotBeenReadFromBitcode();
+  bool isDecl = GV->isDeclaration() && !GV->isMaterializable();
   if (GV->hasHiddenVisibility() && !isDecl && !GV->hasCommonLinkage())
     return false;
   return GV->hasWeakLinkage() || GV->hasLinkOnceLinkage() ||
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCTargetMachine.cpp b/libclamav/c++/llvm/lib/Target/PowerPC/PPCTargetMachine.cpp
index c7f7882..22eecd4 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCTargetMachine.cpp
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCTargetMachine.cpp
@@ -45,7 +45,7 @@ PPCTargetMachine::PPCTargetMachine(const Target &T, const std::string &TT,
     Subtarget(TT, FS, is64Bit),
     DataLayout(Subtarget.getTargetDataString()), InstrInfo(*this),
     FrameInfo(*this, is64Bit), JITInfo(*this, is64Bit), TLInfo(*this),
-    InstrItins(Subtarget.getInstrItineraryData()), MachOWriterInfo(*this) {
+    InstrItins(Subtarget.getInstrItineraryData()) {
 
   if (getRelocationModel() == Reloc::Default) {
     if (Subtarget.isDarwin())
@@ -91,33 +91,6 @@ bool PPCTargetMachine::addPreEmitPass(PassManagerBase &PM,
 
 bool PPCTargetMachine::addCodeEmitter(PassManagerBase &PM,
                                       CodeGenOpt::Level OptLevel,
-                                      MachineCodeEmitter &MCE) {
-  // The JIT should use the static relocation model in ppc32 mode, PIC in ppc64.
-  // FIXME: This should be moved to TargetJITInfo!!
-  if (Subtarget.isPPC64()) {
-    // We use PIC codegen in ppc64 mode, because otherwise we'd have to use many
-    // instructions to materialize arbitrary global variable + function +
-    // constant pool addresses.
-    setRelocationModel(Reloc::PIC_);
-    // Temporary workaround for the inability of PPC64 JIT to handle jump
-    // tables.
-    DisableJumpTables = true;      
-  } else {
-    setRelocationModel(Reloc::Static);
-  }
-  
-  // Inform the subtarget that we are in JIT mode.  FIXME: does this break macho
-  // writing?
-  Subtarget.SetJITMode();
-  
-  // Machine code emitter pass for PowerPC.
-  PM.add(createPPCCodeEmitterPass(*this, MCE));
-
-  return false;
-}
-
-bool PPCTargetMachine::addCodeEmitter(PassManagerBase &PM,
-                                      CodeGenOpt::Level OptLevel,
                                       JITCodeEmitter &JCE) {
   // The JIT should use the static relocation model in ppc32 mode, PIC in ppc64.
   // FIXME: This should be moved to TargetJITInfo!!
@@ -143,57 +116,6 @@ bool PPCTargetMachine::addCodeEmitter(PassManagerBase &PM,
   return false;
 }
 
-bool PPCTargetMachine::addCodeEmitter(PassManagerBase &PM,
-                                      CodeGenOpt::Level OptLevel,
-                                      ObjectCodeEmitter &OCE) {
-  // The JIT should use the static relocation model in ppc32 mode, PIC in ppc64.
-  // FIXME: This should be moved to TargetJITInfo!!
-  if (Subtarget.isPPC64()) {
-    // We use PIC codegen in ppc64 mode, because otherwise we'd have to use many
-    // instructions to materialize arbitrary global variable + function +
-    // constant pool addresses.
-    setRelocationModel(Reloc::PIC_);
-    // Temporary workaround for the inability of PPC64 JIT to handle jump
-    // tables.
-    DisableJumpTables = true;      
-  } else {
-    setRelocationModel(Reloc::Static);
-  }
-  
-  // Inform the subtarget that we are in JIT mode.  FIXME: does this break macho
-  // writing?
-  Subtarget.SetJITMode();
-  
-  // Machine code emitter pass for PowerPC.
-  PM.add(createPPCObjectCodeEmitterPass(*this, OCE));
-
-  return false;
-}
-
-bool PPCTargetMachine::addSimpleCodeEmitter(PassManagerBase &PM,
-                                            CodeGenOpt::Level OptLevel,
-                                            MachineCodeEmitter &MCE) {
-  // Machine code emitter pass for PowerPC.
-  PM.add(createPPCCodeEmitterPass(*this, MCE));
-  return false;
-}
-
-bool PPCTargetMachine::addSimpleCodeEmitter(PassManagerBase &PM,
-                                            CodeGenOpt::Level OptLevel,
-                                            JITCodeEmitter &JCE) {
-  // Machine code emitter pass for PowerPC.
-  PM.add(createPPCJITCodeEmitterPass(*this, JCE));
-  return false;
-}
-
-bool PPCTargetMachine::addSimpleCodeEmitter(PassManagerBase &PM,
-                                            CodeGenOpt::Level OptLevel,
-                                            ObjectCodeEmitter &OCE) {
-  // Machine code emitter pass for PowerPC.
-  PM.add(createPPCObjectCodeEmitterPass(*this, OCE));
-  return false;
-}
-
 /// getLSDAEncoding - Returns the LSDA pointer encoding. The choices are 4-byte,
 /// 8-byte, and target default. The CIE is hard-coded to indicate that the LSDA
 /// pointer in the FDE section is an "sdata4", and should be encoded as a 4-byte
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/PPCTargetMachine.h b/libclamav/c++/llvm/lib/Target/PowerPC/PPCTargetMachine.h
index 4afcb23..a654435 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/PPCTargetMachine.h
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/PPCTargetMachine.h
@@ -19,7 +19,6 @@
 #include "PPCJITInfo.h"
 #include "PPCInstrInfo.h"
 #include "PPCISelLowering.h"
-#include "PPCMachOWriterInfo.h"
 #include "llvm/Target/TargetMachine.h"
 #include "llvm/Target/TargetData.h"
 
@@ -37,7 +36,6 @@ class PPCTargetMachine : public LLVMTargetMachine {
   PPCJITInfo          JITInfo;
   PPCTargetLowering   TLInfo;
   InstrItineraryData  InstrItins;
-  PPCMachOWriterInfo  MachOWriterInfo;
 
 public:
   PPCTargetMachine(const Target &T, const std::string &TT,
@@ -58,9 +56,6 @@ public:
   virtual const InstrItineraryData getInstrItineraryData() const {  
     return InstrItins;
   }
-  virtual const PPCMachOWriterInfo *getMachOWriterInfo() const {
-    return &MachOWriterInfo;
-  }
 
   /// getLSDAEncoding - Returns the LSDA pointer encoding. The choices are
   /// 4-byte, 8-byte, and target default. The CIE is hard-coded to indicate that
@@ -78,20 +73,7 @@ public:
   virtual bool addInstSelector(PassManagerBase &PM, CodeGenOpt::Level OptLevel);
   virtual bool addPreEmitPass(PassManagerBase &PM, CodeGenOpt::Level OptLevel);
   virtual bool addCodeEmitter(PassManagerBase &PM, CodeGenOpt::Level OptLevel,
-                              MachineCodeEmitter &MCE);
-  virtual bool addCodeEmitter(PassManagerBase &PM, CodeGenOpt::Level OptLevel,
                               JITCodeEmitter &JCE);
-  virtual bool addCodeEmitter(PassManagerBase &PM, CodeGenOpt::Level OptLevel,
-                              ObjectCodeEmitter &OCE);
-  virtual bool addSimpleCodeEmitter(PassManagerBase &PM,
-                                    CodeGenOpt::Level OptLevel,
-                                    MachineCodeEmitter &MCE);
-  virtual bool addSimpleCodeEmitter(PassManagerBase &PM,
-                                    CodeGenOpt::Level OptLevel,
-                                    JITCodeEmitter &JCE);
-  virtual bool addSimpleCodeEmitter(PassManagerBase &PM,
-                                    CodeGenOpt::Level OptLevel,
-                                    ObjectCodeEmitter &OCE);
   virtual bool getEnableTailMergeDefault() const;
 };
 
diff --git a/libclamav/c++/llvm/lib/Target/PowerPC/README.txt b/libclamav/c++/llvm/lib/Target/PowerPC/README.txt
index 8f265cf..e49bda0 100644
--- a/libclamav/c++/llvm/lib/Target/PowerPC/README.txt
+++ b/libclamav/c++/llvm/lib/Target/PowerPC/README.txt
@@ -895,3 +895,20 @@ define double @test_FNEG_sel(double %A, double %B, double %C) {
         ret double %E
 }
 
+//===----------------------------------------------------------------------===//
+The save/restore sequence for CR in prolog/epilog is terrible:
+- Each CR subreg is saved individually, rather than doing one save as a unit.
+- On Darwin, the save is done after the decrement of SP, which means the offset
+from SP of the save slot can be too big for a store instruction, which means we
+need an additional register (currently hacked in 96015+96020; the solution there
+is correct, but poor).
+- On SVR4 the same thing can happen, and I don't think saving before the SP
+decrement is safe on that target, as there is no red zone.  This is currently
+broken AFAIK, although it's not a target I can exercise.
+The following demonstrates the problem:
+extern void bar(char *p);
+void foo() {
+  char x[100000];
+  bar(x);
+  __asm__("" ::: "cr2");
+}
diff --git a/libclamav/c++/llvm/lib/Target/README.txt b/libclamav/c++/llvm/lib/Target/README.txt
index 716085c..4fd46a8 100644
--- a/libclamav/c++/llvm/lib/Target/README.txt
+++ b/libclamav/c++/llvm/lib/Target/README.txt
@@ -1751,22 +1751,71 @@ from gcc.
 Missed instcombine transformation:
 define i32 @a(i32 %x) nounwind readnone {
 entry:
-  %shr = lshr i32 %x, 5                           ; <i32> [#uses=1]
-  %xor = xor i32 %shr, 67108864                   ; <i32> [#uses=1]
-  %sub = add i32 %xor, -67108864                  ; <i32> [#uses=1]
+  %rem = srem i32 %x, 32
+  %shl = shl i32 1, %rem
+  ret i32 %shl
+}
+
+The srem can be transformed to an and because if x is negative, the shift is
+undefined. Testcase derived from gcc.
+
+//===---------------------------------------------------------------------===//
+
+Missed instcombine/dagcombine transformation:
+define i32 @a(i32 %x, i32 %y) nounwind readnone {
+entry:
+  %mul = mul i32 %y, -8
+  %sub = sub i32 %x, %mul
   ret i32 %sub
 }
 
-This function is equivalent to "ashr i32 %x, 5".  Testcase derived from gcc.
+Should compile to something like x+y*8, but currently compiles to an
+inefficient result.  Testcase derived from gcc.
+
+//===---------------------------------------------------------------------===//
+
+Missed instcombine/dagcombine transformation:
+define void @lshift_lt(i8 zeroext %a) nounwind {
+entry:
+  %conv = zext i8 %a to i32
+  %shl = shl i32 %conv, 3
+  %cmp = icmp ult i32 %shl, 33
+  br i1 %cmp, label %if.then, label %if.end
+
+if.then:
+  tail call void @bar() nounwind
+  ret void
+
+if.end:
+  ret void
+}
+declare void @bar() nounwind
+
+The shift should be eliminated.  Testcase derived from gcc.
 
 //===---------------------------------------------------------------------===//
 
-isSafeToLoadUnconditionally should allow a GEP of a global/alloca with constant
-indicies within the bounds of the allocated object. Reduced example:
+These compile into different code, one gets recognized as a switch and the
+other doesn't due to phase ordering issues (PR6212):
 
-const int a[] = {3,6};
-int b(int y) { int* x = y ? &a[0] : &a[1]; return *x; }
+int test1(int mainType, int subType) {
+  if (mainType == 7)
+    subType = 4;
+  else if (mainType == 9)
+    subType = 6;
+  else if (mainType == 11)
+    subType = 9;
+  return subType;
+}
 
-All the loads should be eliminated.  Testcase derived from gcc.
+int test2(int mainType, int subType) {
+  if (mainType == 7)
+    subType = 4;
+  if (mainType == 9)
+    subType = 6;
+  if (mainType == 11)
+    subType = 9;
+  return subType;
+}
 
 //===---------------------------------------------------------------------===//
diff --git a/libclamav/c++/llvm/lib/Target/SubtargetFeature.cpp b/libclamav/c++/llvm/lib/Target/SubtargetFeature.cpp
index 7cc4fd1..2094cc9 100644
--- a/libclamav/c++/llvm/lib/Target/SubtargetFeature.cpp
+++ b/libclamav/c++/llvm/lib/Target/SubtargetFeature.cpp
@@ -67,7 +67,7 @@ static void Split(std::vector<std::string> &V, const std::string &S) {
   while (true) {
     // Find the next comma
     size_t Comma = S.find(',', Pos);
-    // If no comma found then the the rest of the string is used
+    // If no comma found then the rest of the string is used
     if (Comma == std::string::npos) {
       // Add string to vector
       V.push_back(S.substr(Pos));
diff --git a/libclamav/c++/llvm/lib/Target/TargetAsmLexer.cpp b/libclamav/c++/llvm/lib/Target/TargetAsmLexer.cpp
index 0ae6c14..d4893ff 100644
--- a/libclamav/c++/llvm/lib/Target/TargetAsmLexer.cpp
+++ b/libclamav/c++/llvm/lib/Target/TargetAsmLexer.cpp
@@ -10,5 +10,5 @@
 #include "llvm/Target/TargetAsmLexer.h"
 using namespace llvm;
 
-TargetAsmLexer::TargetAsmLexer(const Target &T) : TheTarget(T) {}
+TargetAsmLexer::TargetAsmLexer(const Target &T) : TheTarget(T), Lexer(NULL) {}
 TargetAsmLexer::~TargetAsmLexer() {}
diff --git a/libclamav/c++/llvm/lib/Target/TargetMachOWriterInfo.cpp b/libclamav/c++/llvm/lib/Target/TargetMachOWriterInfo.cpp
deleted file mode 100644
index d608119..0000000
--- a/libclamav/c++/llvm/lib/Target/TargetMachOWriterInfo.cpp
+++ /dev/null
@@ -1,25 +0,0 @@
-//===-- llvm/Target/TargetMachOWriterInfo.h - MachO Writer Info -*- C++ -*-===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// This file defines the TargetMachOWriterInfo class.
-//
-//===----------------------------------------------------------------------===//
-
-#include "llvm/Target/TargetMachOWriterInfo.h"
-#include "llvm/CodeGen/MachineRelocation.h"
-using namespace llvm;
-
-TargetMachOWriterInfo::~TargetMachOWriterInfo() {}
-
-MachineRelocation
-TargetMachOWriterInfo::GetJTRelocation(unsigned Offset,
-                                       MachineBasicBlock *MBB) const {
-  // FIXME: do something about PIC
-  return MachineRelocation::getBB(Offset, MachineRelocation::VANILLA, MBB);
-}
diff --git a/libclamav/c++/llvm/lib/Target/TargetMachine.cpp b/libclamav/c++/llvm/lib/Target/TargetMachine.cpp
index fec59b5..88871e3 100644
--- a/libclamav/c++/llvm/lib/Target/TargetMachine.cpp
+++ b/libclamav/c++/llvm/lib/Target/TargetMachine.cpp
@@ -40,7 +40,7 @@ namespace llvm {
   bool UnwindTablesMandatory;
   Reloc::Model RelocationModel;
   CodeModel::Model CMModel;
-  bool PerformTailCallOpt;
+  bool GuaranteedTailCallOpt;
   unsigned StackAlignment;
   bool RealignStack;
   bool DisableJumpTables;
@@ -173,9 +173,9 @@ DefCodeModel("code-model",
                "Large code model"),
     clEnumValEnd));
 static cl::opt<bool, true>
-EnablePerformTailCallOpt("tailcallopt",
-  cl::desc("Turn on tail call optimization."),
-  cl::location(PerformTailCallOpt),
+EnableGuaranteedTailCallOpt("tailcallopt",
+  cl::desc("Turn fastcc calls into tail calls by (potentially) changing ABI."),
+  cl::location(GuaranteedTailCallOpt),
   cl::init(false));
 static cl::opt<unsigned, true>
 OverrideStackAlignment("stack-alignment",
diff --git a/libclamav/c++/llvm/lib/Target/TargetRegisterInfo.cpp b/libclamav/c++/llvm/lib/Target/TargetRegisterInfo.cpp
index fac67e2..52983ff 100644
--- a/libclamav/c++/llvm/lib/Target/TargetRegisterInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/TargetRegisterInfo.cpp
@@ -86,9 +86,10 @@ BitVector TargetRegisterInfo::getAllocatableSet(const MachineFunction &MF,
 /// getFrameIndexOffset - Returns the displacement from the frame register to
 /// the stack frame of the specified index. This is the default implementation
 /// which is overridden for some targets.
-int TargetRegisterInfo::getFrameIndexOffset(MachineFunction &MF, int FI) const {
+int TargetRegisterInfo::getFrameIndexOffset(const MachineFunction &MF,
+                                            int FI) const {
   const TargetFrameInfo &TFI = *MF.getTarget().getFrameInfo();
-  MachineFrameInfo *MFI = MF.getFrameInfo();
+  const MachineFrameInfo *MFI = MF.getFrameInfo();
   return MFI->getObjectOffset(FI) + MFI->getStackSize() -
     TFI.getOffsetOfLocalArea() + MFI->getOffsetAdjustment();
 }
@@ -96,7 +97,7 @@ int TargetRegisterInfo::getFrameIndexOffset(MachineFunction &MF, int FI) const {
 /// getInitialFrameState - Returns a list of machine moves that are assumed
 /// on entry to a function.
 void
-TargetRegisterInfo::getInitialFrameState(std::vector<MachineMove> &Moves) const {
+TargetRegisterInfo::getInitialFrameState(std::vector<MachineMove> &Moves) const{
   // Default is to do nothing.
 }
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/AsmParser/X86AsmLexer.cpp b/libclamav/c++/llvm/lib/Target/X86/AsmParser/X86AsmLexer.cpp
index 3998b08..a58f58e 100644
--- a/libclamav/c++/llvm/lib/Target/X86/AsmParser/X86AsmLexer.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/AsmParser/X86AsmLexer.cpp
@@ -22,13 +22,12 @@ namespace {
   
 class X86AsmLexer : public TargetAsmLexer {
   const MCAsmInfo &AsmInfo;
-  MCAsmLexer *Lexer;
   
   bool tentativeIsValid;
   AsmToken tentativeToken;
   
   const AsmToken &lexTentative() {
-    tentativeToken = Lexer->Lex();
+    tentativeToken = getLexer()->Lex();
     tentativeIsValid = true;
     return tentativeToken;
   }
@@ -39,7 +38,7 @@ class X86AsmLexer : public TargetAsmLexer {
       return tentativeToken;
     }
     else {
-      return Lexer->Lex();
+      return getLexer()->Lex();
     }
   }
   
@@ -64,20 +63,16 @@ protected:
   }
 public:
   X86AsmLexer(const Target &T, const MCAsmInfo &MAI)
-    : TargetAsmLexer(T), AsmInfo(MAI), Lexer(NULL), tentativeIsValid(false) {
-  }
-  
-  void InstallLexer(MCAsmLexer &L) {
-    Lexer = &L;
+    : TargetAsmLexer(T), AsmInfo(MAI), tentativeIsValid(false) {
   }
 };
 
 }
 
-static unsigned MatchRegisterName(const StringRef &Name);
+static unsigned MatchRegisterName(StringRef Name);
 
 AsmToken X86AsmLexer::LexTokenATT() {
-  const AsmToken &lexedToken = lexDefinite();
+  const AsmToken lexedToken = lexDefinite();
   
   switch (lexedToken.getKind()) {
   default:
diff --git a/libclamav/c++/llvm/lib/Target/X86/AsmParser/X86AsmParser.cpp b/libclamav/c++/llvm/lib/Target/X86/AsmParser/X86AsmParser.cpp
index 19fbf85..84d7bb7 100644
--- a/libclamav/c++/llvm/lib/Target/X86/AsmParser/X86AsmParser.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/AsmParser/X86AsmParser.cpp
@@ -10,6 +10,7 @@
 #include "llvm/Target/TargetAsmParser.h"
 #include "X86.h"
 #include "llvm/ADT/SmallVector.h"
+#include "llvm/ADT/StringSwitch.h"
 #include "llvm/ADT/Twine.h"
 #include "llvm/MC/MCStreamer.h"
 #include "llvm/MC/MCExpr.h"
@@ -67,7 +68,7 @@ public:
 /// @name Auto-generated Match Functions
 /// {  
 
-static unsigned MatchRegisterName(const StringRef &Name);
+static unsigned MatchRegisterName(StringRef Name);
 
 /// }
 
@@ -172,8 +173,25 @@ struct X86Operand : public MCParsedAsmOperand {
   
   bool isMem() const { return Kind == Memory; }
 
+  bool isAbsMem() const {
+    return Kind == Memory && !getMemSegReg() && !getMemBaseReg() &&
+      !getMemIndexReg() && getMemScale() == 1;
+  }
+
+  bool isNoSegMem() const {
+    return Kind == Memory && !getMemSegReg();
+  }
+
   bool isReg() const { return Kind == Register; }
 
+  void addExpr(MCInst &Inst, const MCExpr *Expr) const {
+    // Add as immediates when possible.
+    if (const MCConstantExpr *CE = dyn_cast<MCConstantExpr>(Expr))
+      Inst.addOperand(MCOperand::CreateImm(CE->getValue()));
+    else
+      Inst.addOperand(MCOperand::CreateExpr(Expr));
+  }
+
   void addRegOperands(MCInst &Inst, unsigned N) const {
     assert(N == 1 && "Invalid number of operands!");
     Inst.addOperand(MCOperand::CreateReg(getReg()));
@@ -181,26 +199,35 @@ struct X86Operand : public MCParsedAsmOperand {
 
   void addImmOperands(MCInst &Inst, unsigned N) const {
     assert(N == 1 && "Invalid number of operands!");
-    Inst.addOperand(MCOperand::CreateExpr(getImm()));
+    addExpr(Inst, getImm());
   }
 
   void addImmSExt8Operands(MCInst &Inst, unsigned N) const {
     // FIXME: Support user customization of the render method.
     assert(N == 1 && "Invalid number of operands!");
-    Inst.addOperand(MCOperand::CreateExpr(getImm()));
+    addExpr(Inst, getImm());
   }
 
   void addMemOperands(MCInst &Inst, unsigned N) const {
-    assert((N == 4 || N == 5) && "Invalid number of operands!");
-
+    assert((N == 5) && "Invalid number of operands!");
     Inst.addOperand(MCOperand::CreateReg(getMemBaseReg()));
     Inst.addOperand(MCOperand::CreateImm(getMemScale()));
     Inst.addOperand(MCOperand::CreateReg(getMemIndexReg()));
+    addExpr(Inst, getMemDisp());
+    Inst.addOperand(MCOperand::CreateReg(getMemSegReg()));
+  }
+
+  void addAbsMemOperands(MCInst &Inst, unsigned N) const {
+    assert((N == 1) && "Invalid number of operands!");
     Inst.addOperand(MCOperand::CreateExpr(getMemDisp()));
+  }
 
-    // FIXME: What a hack.
-    if (N == 5)
-      Inst.addOperand(MCOperand::CreateReg(getMemSegReg()));
+  void addNoSegMemOperands(MCInst &Inst, unsigned N) const {
+    assert((N == 4) && "Invalid number of operands!");
+    Inst.addOperand(MCOperand::CreateReg(getMemBaseReg()));
+    Inst.addOperand(MCOperand::CreateImm(getMemScale()));
+    Inst.addOperand(MCOperand::CreateReg(getMemIndexReg()));
+    addExpr(Inst, getMemDisp());
   }
 
   static X86Operand *CreateToken(StringRef Str, SMLoc Loc) {
@@ -222,10 +249,24 @@ struct X86Operand : public MCParsedAsmOperand {
     return Res;
   }
 
+  /// Create an absolute memory operand.
+  static X86Operand *CreateMem(const MCExpr *Disp, SMLoc StartLoc,
+                               SMLoc EndLoc) {
+    X86Operand *Res = new X86Operand(Memory, StartLoc, EndLoc);
+    Res->Mem.SegReg   = 0;
+    Res->Mem.Disp     = Disp;
+    Res->Mem.BaseReg  = 0;
+    Res->Mem.IndexReg = 0;
+    Res->Mem.Scale    = 1;
+    return Res;
+  }
+
+  /// Create a generalized memory operand.
   static X86Operand *CreateMem(unsigned SegReg, const MCExpr *Disp,
                                unsigned BaseReg, unsigned IndexReg,
                                unsigned Scale, SMLoc StartLoc, SMLoc EndLoc) {
-    // We should never just have a displacement, that would be an immediate.
+    // We should never just have a displacement, that should be parsed as an
+    // absolute memory operand.
     assert((SegReg || BaseReg || IndexReg) && "Invalid memory operand!");
 
     // The scale should always be one of {1,2,4,8}.
@@ -259,6 +300,42 @@ bool X86ATTAsmParser::ParseRegister(unsigned &RegNo,
   // FIXME: Validate register for the current architecture; we have to do
   // validation later, so maybe there is no need for this here.
   RegNo = MatchRegisterName(Tok.getString());
+  
+  // Parse %st(1) and "%st" as "%st(0)"
+  if (RegNo == 0 && Tok.getString() == "st") {
+    RegNo = X86::ST0;
+    EndLoc = Tok.getLoc();
+    Parser.Lex(); // Eat 'st'
+    
+    // Check to see if we have '(4)' after %st.
+    if (getLexer().isNot(AsmToken::LParen))
+      return false;
+    // Lex the paren.
+    getParser().Lex();
+
+    const AsmToken &IntTok = Parser.getTok();
+    if (IntTok.isNot(AsmToken::Integer))
+      return Error(IntTok.getLoc(), "expected stack index");
+    switch (IntTok.getIntVal()) {
+    case 0: RegNo = X86::ST0; break;
+    case 1: RegNo = X86::ST1; break;
+    case 2: RegNo = X86::ST2; break;
+    case 3: RegNo = X86::ST3; break;
+    case 4: RegNo = X86::ST4; break;
+    case 5: RegNo = X86::ST5; break;
+    case 6: RegNo = X86::ST6; break;
+    case 7: RegNo = X86::ST7; break;
+    default: return Error(IntTok.getLoc(), "invalid stack index");
+    }
+    
+    if (getParser().Lex().isNot(AsmToken::RParen))
+      return Error(Parser.getTok().getLoc(), "expected ')'");
+    
+    EndLoc = Tok.getLoc();
+    Parser.Lex(); // Eat ')'
+    return false;
+  }
+  
   if (RegNo == 0)
     return Error(Tok.getLoc(), "invalid register name");
 
@@ -312,7 +389,7 @@ X86Operand *X86ATTAsmParser::ParseMemOperand() {
     if (getLexer().isNot(AsmToken::LParen)) {
       // Unless we have a segment register, treat this as an immediate.
       if (SegReg == 0)
-        return X86Operand::CreateImm(Disp, MemStart, ExprEnd);
+        return X86Operand::CreateMem(Disp, MemStart, ExprEnd);
       return X86Operand::CreateMem(SegReg, Disp, 0, 0, 1, MemStart, ExprEnd);
     }
     
@@ -339,7 +416,7 @@ X86Operand *X86ATTAsmParser::ParseMemOperand() {
       if (getLexer().isNot(AsmToken::LParen)) {
         // Unless we have a segment register, treat this as an immediate.
         if (SegReg == 0)
-          return X86Operand::CreateImm(Disp, LParenLoc, ExprEnd);
+          return X86Operand::CreateMem(Disp, LParenLoc, ExprEnd);
         return X86Operand::CreateMem(SegReg, Disp, 0, 0, 1, MemStart, ExprEnd);
       }
       
@@ -424,8 +501,20 @@ X86Operand *X86ATTAsmParser::ParseMemOperand() {
 bool X86ATTAsmParser::
 ParseInstruction(const StringRef &Name, SMLoc NameLoc,
                  SmallVectorImpl<MCParsedAsmOperand*> &Operands) {
-
-  Operands.push_back(X86Operand::CreateToken(Name, NameLoc));
+  // FIXME: Hack to recognize "sal..." and "rep..." for now. We need a way to
+  // represent alternative syntaxes in the .td file, without requiring
+  // instruction duplication.
+  StringRef PatchedName = StringSwitch<StringRef>(Name)
+    .Case("sal", "shl")
+    .Case("salb", "shlb")
+    .Case("sall", "shll")
+    .Case("salq", "shlq")
+    .Case("salw", "shlw")
+    .Case("repe", "rep")
+    .Case("repz", "rep")
+    .Case("repnz", "repne")
+    .Default(Name);
+  Operands.push_back(X86Operand::CreateToken(PatchedName, NameLoc));
 
   if (getLexer().isNot(AsmToken::EndOfStatement)) {
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86ATTInstPrinter.cpp b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86ATTInstPrinter.cpp
index 804dbb9..1a35a49 100644
--- a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86ATTInstPrinter.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86ATTInstPrinter.cpp
@@ -18,17 +18,22 @@
 #include "llvm/MC/MCAsmInfo.h"
 #include "llvm/MC/MCExpr.h"
 #include "llvm/Support/ErrorHandling.h"
+#include "llvm/Support/Format.h"
 #include "llvm/Support/FormattedStream.h"
 #include "X86GenInstrNames.inc"
 using namespace llvm;
 
 // Include the auto-generated portion of the assembly writer.
 #define MachineInstr MCInst
-#define NO_ASM_WRITER_BOILERPLATE
+#define GET_INSTRUCTION_NAME
 #include "X86GenAsmWriter.inc"
 #undef MachineInstr
 
 void X86ATTInstPrinter::printInst(const MCInst *MI) { printInstruction(MI); }
+StringRef X86ATTInstPrinter::getOpcodeName(unsigned Opcode) const {
+  return getInstructionName(Opcode);
+}
+
 
 void X86ATTInstPrinter::printSSECC(const MCInst *MI, unsigned Op) {
   switch (MI->getOperand(Op).getImm()) {
@@ -66,6 +71,10 @@ void X86ATTInstPrinter::printOperand(const MCInst *MI, unsigned OpNo) {
     O << '%' << getRegisterName(Op.getReg());
   } else if (Op.isImm()) {
     O << '$' << Op.getImm();
+    
+    if (CommentStream && (Op.getImm() > 255 || Op.getImm() < -256))
+      *CommentStream << format("imm = 0x%X\n", Op.getImm());
+    
   } else {
     assert(Op.isExpr() && "unknown operand kind in printOperand");
     O << '$' << *Op.getExpr();
diff --git a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86ATTInstPrinter.h b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86ATTInstPrinter.h
index 3180618..d109a07 100644
--- a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86ATTInstPrinter.h
+++ b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86ATTInstPrinter.h
@@ -26,11 +26,12 @@ public:
 
   
   virtual void printInst(const MCInst *MI);
-  
+  virtual StringRef getOpcodeName(unsigned Opcode) const;
+
   // Autogenerated by tblgen.
   void printInstruction(const MCInst *MI);
   static const char *getRegisterName(unsigned RegNo);
-
+  static const char *getInstructionName(unsigned Opcode);
 
   void printOperand(const MCInst *MI, unsigned OpNo);
   void printMemReference(const MCInst *MI, unsigned Op);
diff --git a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86AsmPrinter.cpp b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86AsmPrinter.cpp
index 9390ff3..dfcee79 100644
--- a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86AsmPrinter.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86AsmPrinter.cpp
@@ -8,12 +8,10 @@
 //===----------------------------------------------------------------------===//
 //
 // This file contains a printer that converts from our internal representation
-// of machine-dependent LLVM code to AT&T format assembly
-// language. This printer is the output mechanism used by `llc'.
+// of machine-dependent LLVM code to X86 machine code.
 //
 //===----------------------------------------------------------------------===//
 
-#define DEBUG_TYPE "asm-printer"
 #include "X86AsmPrinter.h"
 #include "X86ATTInstPrinter.h"
 #include "X86IntelInstPrinter.h"
@@ -29,6 +27,7 @@
 #include "llvm/Type.h"
 #include "llvm/Assembly/Writer.h"
 #include "llvm/MC/MCContext.h"
+#include "llvm/MC/MCExpr.h"
 #include "llvm/MC/MCSectionMachO.h"
 #include "llvm/MC/MCStreamer.h"
 #include "llvm/MC/MCSymbol.h"
@@ -37,163 +36,61 @@
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/FormattedStream.h"
 #include "llvm/MC/MCAsmInfo.h"
+#include "llvm/Target/Mangler.h"
 #include "llvm/Target/TargetLoweringObjectFile.h"
 #include "llvm/Target/TargetOptions.h"
 #include "llvm/Target/TargetRegistry.h"
 #include "llvm/ADT/SmallString.h"
-#include "llvm/ADT/Statistic.h"
 using namespace llvm;
 
-STATISTIC(EmittedInsts, "Number of machine instrs printed");
-
 //===----------------------------------------------------------------------===//
 // Primitive Helper Functions.
 //===----------------------------------------------------------------------===//
 
-void X86AsmPrinter::printMCInst(const MCInst *MI) {
-  if (MAI->getAssemblerDialect() == 0)
-    X86ATTInstPrinter(O, *MAI).printInstruction(MI);
-  else
-    X86IntelInstPrinter(O, *MAI).printInstruction(MI);
-}
-
 void X86AsmPrinter::PrintPICBaseSymbol() const {
   const TargetLowering *TLI = TM.getTargetLowering();
   O << *static_cast<const X86TargetLowering*>(TLI)->getPICBaseSymbol(MF,
                                                                     OutContext);
 }
 
-void X86AsmPrinter::emitFunctionHeader(const MachineFunction &MF) {
-  unsigned FnAlign = MF.getAlignment();
-  const Function *F = MF.getFunction();
+MCSymbol *X86AsmPrinter::GetGlobalValueSymbol(const GlobalValue *GV) const {
+  SmallString<60> NameStr;
+  Mang->getNameWithPrefix(NameStr, GV, false);
+  MCSymbol *Symb = OutContext.GetOrCreateSymbol(NameStr.str());
 
   if (Subtarget->isTargetCygMing()) {
-    X86COFFMachineModuleInfo &COFFMMI = 
+    X86COFFMachineModuleInfo &COFFMMI =
       MMI->getObjFileInfo<X86COFFMachineModuleInfo>();
-    COFFMMI.DecorateCygMingName(CurrentFnSym, OutContext, F,
-                                *TM.getTargetData());
-  }
-
-  OutStreamer.SwitchSection(getObjFileLowering().SectionForGlobal(F, Mang, TM));
-  EmitAlignment(FnAlign, F);
-
-  switch (F->getLinkage()) {
-  default: llvm_unreachable("Unknown linkage type!");
-  case Function::InternalLinkage:  // Symbols default to internal.
-  case Function::PrivateLinkage:
-    break;
-  case Function::DLLExportLinkage:
-  case Function::ExternalLinkage:
-    OutStreamer.EmitSymbolAttribute(CurrentFnSym, MCSA_Global);
-    break;
-  case Function::LinkerPrivateLinkage:
-  case Function::LinkOnceAnyLinkage:
-  case Function::LinkOnceODRLinkage:
-  case Function::WeakAnyLinkage:
-  case Function::WeakODRLinkage:
-    if (Subtarget->isTargetDarwin()) {
-      OutStreamer.EmitSymbolAttribute(CurrentFnSym, MCSA_Global);
-      O << MAI->getWeakDefDirective() << *CurrentFnSym << '\n';
-    } else if (Subtarget->isTargetCygMing()) {
-      OutStreamer.EmitSymbolAttribute(CurrentFnSym, MCSA_Global);
-      // FIXME: linkonce should be a section attribute, handled by COFF Section
-      // assignment.
-      // http://sourceware.org/binutils/docs-2.20/as/Linkonce.html#Linkonce
-      O << "\t.linkonce discard\n";
-    } else {
-      O << "\t.weak\t" << *CurrentFnSym << '\n';
-    }
-    break;
-  }
+    COFFMMI.DecorateCygMingName(Symb, OutContext, GV, *TM.getTargetData());
 
-  printVisibility(CurrentFnSym, F->getVisibility());
+    // Save function name for later type emission.
+    if (const Function *F = dyn_cast<Function>(GV))
+      if (F->isDeclaration())
+        COFFMMI.addExternalFunction(Symb->getName());
 
-  if (MAI->hasDotTypeDotSizeDirective()) {
-    OutStreamer.EmitSymbolAttribute(CurrentFnSym, MCSA_ELF_TypeFunction);
-  } else if (Subtarget->isTargetCygMing()) {
-    O << "\t.def\t " << *CurrentFnSym;
-    O << ";\t.scl\t" <<
-      (F->hasInternalLinkage() ? COFF::C_STAT : COFF::C_EXT)
-      << ";\t.type\t" << (COFF::DT_FCN << COFF::N_BTSHFT)
-      << ";\t.endef\n";
   }
 
-  O << *CurrentFnSym << ':';
-  if (VerboseAsm) {
-    O.PadToColumn(MAI->getCommentColumn());
-    O << MAI->getCommentString() << ' ';
-    WriteAsOperand(O, F, /*PrintType=*/false, F->getParent());
-  }
-  O << '\n';
-
-  // Add some workaround for linkonce linkage on Cygwin\MinGW
-  if (Subtarget->isTargetCygMing() &&
-      (F->hasLinkOnceLinkage() || F->hasWeakLinkage()))
-    O << "Lllvm$workaround$fake$stub$" << *CurrentFnSym << ":\n";
+  return Symb;
 }
 
-/// runOnMachineFunction - This uses the printMachineInstruction()
-/// method to print assembly for each instruction.
+/// runOnMachineFunction - Emit the function body.
 ///
 bool X86AsmPrinter::runOnMachineFunction(MachineFunction &MF) {
-  const Function *F = MF.getFunction();
-  CallingConv::ID CC = F->getCallingConv();
-
   SetupMachineFunction(MF);
-  O << "\n\n";
 
   if (Subtarget->isTargetCOFF()) {
-    X86COFFMachineModuleInfo &COFFMMI = 
-    MMI->getObjFileInfo<X86COFFMachineModuleInfo>();
-
-    // Populate function information map.  Don't want to populate
-    // non-stdcall or non-fastcall functions' information right now.
-    if (CC == CallingConv::X86_StdCall || CC == CallingConv::X86_FastCall)
-      COFFMMI.AddFunctionInfo(F, *MF.getInfo<X86MachineFunctionInfo>());
+    const Function *F = MF.getFunction();
+    O << "\t.def\t " << *CurrentFnSym << ";\t.scl\t" <<
+    (F->hasInternalLinkage() ? COFF::C_STAT : COFF::C_EXT)
+    << ";\t.type\t" << (COFF::DT_FCN << COFF::N_BTSHFT)
+    << ";\t.endef\n";
   }
 
-  // Print out constants referenced by the function
-  EmitConstantPool(MF.getConstantPool());
-
-  // Print the 'header' of function
-  emitFunctionHeader(MF);
-
-  // Emit pre-function debug and/or EH information.
-  if (MAI->doesSupportDebugInformation() || MAI->doesSupportExceptionHandling())
-    DW->BeginFunction(&MF);
-
-  // Print out code for the function.
-  bool hasAnyRealCode = false;
-  for (MachineFunction::const_iterator I = MF.begin(), E = MF.end();
-       I != E; ++I) {
-    // Print a label for the basic block.
-    EmitBasicBlockStart(I);
-    for (MachineBasicBlock::const_iterator II = I->begin(), IE = I->end();
-         II != IE; ++II) {
-      // Print the assembly for the instruction.
-      if (!II->isLabel())
-        hasAnyRealCode = true;
-      printMachineInstruction(II);
-    }
-  }
-
-  if (Subtarget->isTargetDarwin() && !hasAnyRealCode) {
-    // If the function is empty, then we need to emit *something*. Otherwise,
-    // the function's label might be associated with something that it wasn't
-    // meant to be associated with. We emit a noop in this situation.
-    // We are assuming inline asms are code.
-    O << "\tnop\n";
-  }
-
-  if (MAI->hasDotTypeDotSizeDirective())
-    O << "\t.size\t" << *CurrentFnSym << ", .-" << *CurrentFnSym << '\n';
+  // Have common code print out the function header with linkage info etc.
+  EmitFunctionHeader();
 
-  // Emit post-function debug information.
-  if (MAI->doesSupportDebugInformation() || MAI->doesSupportExceptionHandling())
-    DW->EndFunction(&MF);
-
-  // Print out jump tables referenced by the function.
-  EmitJumpTableInfo(MF);
+  // Emit the rest of the function body.
+  EmitFunctionBody();
 
   // We didn't modify anything.
   return false;
@@ -225,12 +122,6 @@ void X86AsmPrinter::printSymbolOperand(const MachineOperand &MO) {
     else
       GVSym = GetGlobalValueSymbol(GV);
 
-    if (Subtarget->isTargetCygMing()) {
-      X86COFFMachineModuleInfo &COFFMMI =
-        MMI->getObjFileInfo<X86COFFMachineModuleInfo>();
-      COFFMMI.DecorateCygMingName(GVSym, OutContext, GV, *TM.getTargetData());
-    }
-    
     // Handle dllimport linkage.
     if (MO.getTargetFlags() == X86II::MO_DLLIMPORT)
       GVSym = OutContext.GetOrCreateSymbol(Twine("__imp_") + GVSym->getName());
@@ -239,20 +130,20 @@ void X86AsmPrinter::printSymbolOperand(const MachineOperand &MO) {
         MO.getTargetFlags() == X86II::MO_DARWIN_NONLAZY_PIC_BASE) {
       MCSymbol *Sym = GetSymbolWithGlobalValueBase(GV, "$non_lazy_ptr");
       
-      const MCSymbol *&StubSym = 
+      MCSymbol *&StubSym = 
         MMI->getObjFileInfo<MachineModuleInfoMachO>().getGVStubEntry(Sym);
       if (StubSym == 0)
         StubSym = GetGlobalValueSymbol(GV);
       
     } else if (MO.getTargetFlags() == X86II::MO_DARWIN_HIDDEN_NONLAZY_PIC_BASE){
       MCSymbol *Sym = GetSymbolWithGlobalValueBase(GV, "$non_lazy_ptr");
-      const MCSymbol *&StubSym =
+      MCSymbol *&StubSym =
         MMI->getObjFileInfo<MachineModuleInfoMachO>().getHiddenGVStubEntry(Sym);
       if (StubSym == 0)
         StubSym = GetGlobalValueSymbol(GV);
     } else if (MO.getTargetFlags() == X86II::MO_DARWIN_STUB) {
       MCSymbol *Sym = GetSymbolWithGlobalValueBase(GV, "$stub");
-      const MCSymbol *&StubSym =
+      MCSymbol *&StubSym =
         MMI->getObjFileInfo<MachineModuleInfoMachO>().getFnStubEntry(Sym);
       if (StubSym == 0)
         StubSym = GetGlobalValueSymbol(GV);
@@ -274,8 +165,8 @@ void X86AsmPrinter::printSymbolOperand(const MachineOperand &MO) {
       TempNameStr += StringRef(MO.getSymbolName());
       TempNameStr += StringRef("$stub");
       
-      const MCSymbol *Sym = GetExternalSymbolSymbol(TempNameStr.str());
-      const MCSymbol *&StubSym =
+      MCSymbol *Sym = GetExternalSymbolSymbol(TempNameStr.str());
+      MCSymbol *&StubSym =
         MMI->getObjFileInfo<MachineModuleInfoMachO>().getFnStubEntry(Sym);
       if (StubSym == 0) {
         TempNameStr.erase(TempNameStr.end()-5, TempNameStr.end());
@@ -586,24 +477,6 @@ bool X86AsmPrinter::PrintAsmMemoryOperand(const MachineInstr *MI,
 }
 
 
-
-/// printMachineInstruction -- Print out a single X86 LLVM instruction MI in
-/// AT&T syntax to the current output stream.
-///
-void X86AsmPrinter::printMachineInstruction(const MachineInstr *MI) {
-  ++EmittedInsts;
-
-  processDebugLoc(MI, true);
-  
-  printInstructionThroughMCStreamer(MI);
-  
-  if (VerboseAsm)
-    EmitComments(*MI);
-  O << '\n';
-
-  processDebugLoc(MI, false);
-}
-
 void X86AsmPrinter::EmitEndOfAsmFile(Module &M) {
   if (Subtarget->isTargetDarwin()) {
     // All darwin targets use mach-o.
@@ -627,14 +500,17 @@ void X86AsmPrinter::EmitEndOfAsmFile(Module &M) {
       OutStreamer.SwitchSection(TheSection);
 
       for (unsigned i = 0, e = Stubs.size(); i != e; ++i) {
-        O << *Stubs[i].first << ":\n";
-        // Get the MCSymbol without the $stub suffix.
-        O << "\t.indirect_symbol " << *Stubs[i].second;
-        O << "\n\thlt ; hlt ; hlt ; hlt ; hlt\n";
+        // L_foo$stub:
+        OutStreamer.EmitLabel(Stubs[i].first);
+        //   .indirect_symbol _foo
+        OutStreamer.EmitSymbolAttribute(Stubs[i].second, MCSA_IndirectSymbol);
+        // hlt; hlt; hlt; hlt; hlt     hlt = 0xf4 = -12.
+        const char HltInsts[] = { -12, -12, -12, -12, -12 };
+        OutStreamer.EmitBytes(StringRef(HltInsts, 5), 0/*addrspace*/);
       }
-      O << '\n';
       
       Stubs.clear();
+      OutStreamer.AddBlankLine();
     }
 
     // Output stubs for external and common global variables.
@@ -647,10 +523,15 @@ void X86AsmPrinter::EmitEndOfAsmFile(Module &M) {
       OutStreamer.SwitchSection(TheSection);
 
       for (unsigned i = 0, e = Stubs.size(); i != e; ++i) {
-        O << *Stubs[i].first << ":\n\t.indirect_symbol " << *Stubs[i].second;
-        O << "\n\t.long\t0\n";
+        // L_foo$non_lazy_ptr:
+        OutStreamer.EmitLabel(Stubs[i].first);
+        // .indirect_symbol _foo
+        OutStreamer.EmitSymbolAttribute(Stubs[i].second, MCSA_IndirectSymbol);
+        // .long 0
+        OutStreamer.EmitIntValue(0, 4/*size*/, 0/*addrspace*/);
       }
       Stubs.clear();
+      OutStreamer.AddBlankLine();
     }
 
     Stubs = MMIMacho.GetHiddenGVStubList();
@@ -659,10 +540,15 @@ void X86AsmPrinter::EmitEndOfAsmFile(Module &M) {
       EmitAlignment(2);
 
       for (unsigned i = 0, e = Stubs.size(); i != e; ++i) {
-        O << *Stubs[i].first << ":\n" << MAI->getData32bitsDirective();
-        O << *Stubs[i].second << '\n';
+        // L_foo$non_lazy_ptr:
+        OutStreamer.EmitLabel(Stubs[i].first);
+        // .long _foo
+        OutStreamer.EmitValue(MCSymbolRefExpr::Create(Stubs[i].second,
+                                                      OutContext),
+                              4/*size*/, 0/*addrspace*/);
       }
       Stubs.clear();
+      OutStreamer.AddBlankLine();
     }
 
     // Funny Darwin hack: This flag tells the linker that no global symbols
@@ -696,7 +582,6 @@ void X86AsmPrinter::EmitEndOfAsmFile(Module &M) {
       for (Module::const_iterator I = M.begin(), E = M.end(); I != E; ++I)
         if (I->hasDLLExportLinkage()) {
           MCSymbol *Sym = GetGlobalValueSymbol(I);
-          COFFMMI.DecorateCygMingName(Sym, OutContext, I, *TM.getTargetData());
           DLLExportedFns.push_back(Sym);
         }
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86AsmPrinter.h b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86AsmPrinter.h
index b4d88e7..039214a 100644
--- a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86AsmPrinter.h
+++ b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86AsmPrinter.h
@@ -36,8 +36,9 @@ class VISIBILITY_HIDDEN X86AsmPrinter : public AsmPrinter {
   const X86Subtarget *Subtarget;
  public:
   explicit X86AsmPrinter(formatted_raw_ostream &O, TargetMachine &TM,
-                            const MCAsmInfo *T, bool V)
-    : AsmPrinter(O, TM, T, V) {
+                         MCContext &Ctx, MCStreamer &Streamer,
+                         const MCAsmInfo *T)
+    : AsmPrinter(O, TM, Ctx, Streamer, T) {
     Subtarget = &TM.getSubtarget<X86Subtarget>();
   }
 
@@ -57,14 +58,10 @@ class VISIBILITY_HIDDEN X86AsmPrinter : public AsmPrinter {
   
   virtual void EmitEndOfAsmFile(Module &M);
   
-  void printInstructionThroughMCStreamer(const MachineInstr *MI);
-
-
-  void printMCInst(const MCInst *MI);
-
-  void printSymbolOperand(const MachineOperand &MO);
-  
+  virtual void EmitInstruction(const MachineInstr *MI);
   
+  void printSymbolOperand(const MachineOperand &MO);
+  virtual MCSymbol *GetGlobalValueSymbol(const GlobalValue *GV) const;
 
   // These methods are used by the tablegen'erated instruction printer.
   void printOperand(const MachineInstr *MI, unsigned OpNo,
@@ -130,9 +127,6 @@ class VISIBILITY_HIDDEN X86AsmPrinter : public AsmPrinter {
   void PrintPICBaseSymbol() const;
   
   bool runOnMachineFunction(MachineFunction &F);
-
-  void emitFunctionHeader(const MachineFunction &MF);
-
 };
 
 } // end namespace llvm
diff --git a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86IntelInstPrinter.cpp b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86IntelInstPrinter.cpp
index 4efb529..610beb5 100644
--- a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86IntelInstPrinter.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86IntelInstPrinter.cpp
@@ -24,11 +24,14 @@ using namespace llvm;
 
 // Include the auto-generated portion of the assembly writer.
 #define MachineInstr MCInst
-#define NO_ASM_WRITER_BOILERPLATE
+#define GET_INSTRUCTION_NAME
 #include "X86GenAsmWriter1.inc"
 #undef MachineInstr
 
 void X86IntelInstPrinter::printInst(const MCInst *MI) { printInstruction(MI); }
+StringRef X86IntelInstPrinter::getOpcodeName(unsigned Opcode) const {
+  return getInstructionName(Opcode);
+}
 
 void X86IntelInstPrinter::printSSECC(const MCInst *MI, unsigned Op) {
   switch (MI->getOperand(Op).getImm()) {
diff --git a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86IntelInstPrinter.h b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86IntelInstPrinter.h
index 1976177..545bf84 100644
--- a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86IntelInstPrinter.h
+++ b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86IntelInstPrinter.h
@@ -26,10 +26,12 @@ public:
     : MCInstPrinter(O, MAI) {}
   
   virtual void printInst(const MCInst *MI);
+  virtual StringRef getOpcodeName(unsigned Opcode) const;
   
   // Autogenerated by tblgen.
   void printInstruction(const MCInst *MI);
   static const char *getRegisterName(unsigned RegNo);
+  static const char *getInstructionName(unsigned Opcode);
 
 
   void printOperand(const MCInst *MI, unsigned OpNo,
diff --git a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86MCInstLower.cpp b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86MCInstLower.cpp
index b6a3581..fa8d13d 100644
--- a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86MCInstLower.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86MCInstLower.cpp
@@ -14,8 +14,9 @@
 
 #include "X86MCInstLower.h"
 #include "X86AsmPrinter.h"
-#include "X86MCAsmInfo.h"
 #include "X86COFFMachineModuleInfo.h"
+#include "X86MCAsmInfo.h"
+#include "X86MCTargetExpr.h"
 #include "llvm/Analysis/DebugInfo.h"
 #include "llvm/CodeGen/MachineModuleInfoImpls.h"
 #include "llvm/MC/MCContext.h"
@@ -25,6 +26,7 @@
 #include "llvm/Target/Mangler.h"
 #include "llvm/Support/FormattedStream.h"
 #include "llvm/ADT/SmallString.h"
+#include "llvm/Type.h"
 using namespace llvm;
 
 
@@ -44,33 +46,40 @@ MCSymbol *X86MCInstLower::GetPICBaseSymbol() const {
     getPICBaseSymbol(AsmPrinter.MF, Ctx);
 }
 
-/// LowerGlobalAddressOperand - Lower an MO_GlobalAddress operand to an
-/// MCOperand.
+/// GetSymbolFromOperand - Lower an MO_GlobalAddress or MO_ExternalSymbol
+/// operand to an MCSymbol.
 MCSymbol *X86MCInstLower::
-GetGlobalAddressSymbol(const MachineOperand &MO) const {
-  const GlobalValue *GV = MO.getGlobal();
-  
-  bool isImplicitlyPrivate = false;
-  if (MO.getTargetFlags() == X86II::MO_DARWIN_STUB ||
-      MO.getTargetFlags() == X86II::MO_DARWIN_NONLAZY ||
-      MO.getTargetFlags() == X86II::MO_DARWIN_NONLAZY_PIC_BASE ||
-      MO.getTargetFlags() == X86II::MO_DARWIN_HIDDEN_NONLAZY_PIC_BASE)
-    isImplicitlyPrivate = true;
-  
+GetSymbolFromOperand(const MachineOperand &MO) const {
+  assert((MO.isGlobal() || MO.isSymbol()) && "Isn't a symbol reference");
+
   SmallString<128> Name;
-  Mang->getNameWithPrefix(Name, GV, isImplicitlyPrivate);
   
-  if (getSubtarget().isTargetCygMing()) {
-    X86COFFMachineModuleInfo &COFFMMI = 
-      AsmPrinter.MMI->getObjFileInfo<X86COFFMachineModuleInfo>();
-    COFFMMI.DecorateCygMingName(Name, GV, *AsmPrinter.TM.getTargetData());
-  }
+  if (MO.isGlobal()) {
+    bool isImplicitlyPrivate = false;
+    if (MO.getTargetFlags() == X86II::MO_DARWIN_STUB ||
+        MO.getTargetFlags() == X86II::MO_DARWIN_NONLAZY ||
+        MO.getTargetFlags() == X86II::MO_DARWIN_NONLAZY_PIC_BASE ||
+        MO.getTargetFlags() == X86II::MO_DARWIN_HIDDEN_NONLAZY_PIC_BASE)
+      isImplicitlyPrivate = true;
+    
+    const GlobalValue *GV = MO.getGlobal();
+    Mang->getNameWithPrefix(Name, GV, isImplicitlyPrivate);
   
+    if (getSubtarget().isTargetCygMing()) {
+      X86COFFMachineModuleInfo &COFFMMI = 
+        AsmPrinter.MMI->getObjFileInfo<X86COFFMachineModuleInfo>();
+      COFFMMI.DecorateCygMingName(Name, GV, *AsmPrinter.TM.getTargetData());
+    }
+  } else {
+    assert(MO.isSymbol());
+    Name += AsmPrinter.MAI->getGlobalPrefix();
+    Name += MO.getSymbolName();
+  }
+
+  // If the target flags on the operand changes the name of the symbol, do that
+  // before we return the symbol.
   switch (MO.getTargetFlags()) {
-  default: llvm_unreachable("Unknown target flag on GV operand");
-  case X86II::MO_NO_FLAG:                // No flag.
-  case X86II::MO_PIC_BASE_OFFSET:        // Doesn't modify symbol name.
-    break;
+  default: break;
   case X86II::MO_DLLIMPORT: {
     // Handle dllimport linkage.
     const char *Prefix = "__imp_";
@@ -82,190 +91,72 @@ GetGlobalAddressSymbol(const MachineOperand &MO) const {
     Name += "$non_lazy_ptr";
     MCSymbol *Sym = Ctx.GetOrCreateSymbol(Name.str());
 
-    const MCSymbol *&StubSym = getMachOMMI().getGVStubEntry(Sym);
-    if (StubSym == 0)
-      StubSym = AsmPrinter.GetGlobalValueSymbol(GV);
+    MCSymbol *&StubSym = getMachOMMI().getGVStubEntry(Sym);
+    if (StubSym == 0) {
+      assert(MO.isGlobal() && "Extern symbol not handled yet");
+      StubSym = AsmPrinter.GetGlobalValueSymbol(MO.getGlobal());
+    }
     return Sym;
   }
   case X86II::MO_DARWIN_HIDDEN_NONLAZY_PIC_BASE: {
     Name += "$non_lazy_ptr";
     MCSymbol *Sym = Ctx.GetOrCreateSymbol(Name.str());
-    const MCSymbol *&StubSym = getMachOMMI().getHiddenGVStubEntry(Sym);
-    if (StubSym == 0)
-      StubSym = AsmPrinter.GetGlobalValueSymbol(GV);
+    MCSymbol *&StubSym = getMachOMMI().getHiddenGVStubEntry(Sym);
+    if (StubSym == 0) {
+      assert(MO.isGlobal() && "Extern symbol not handled yet");
+      StubSym = AsmPrinter.GetGlobalValueSymbol(MO.getGlobal());
+    }
     return Sym;
   }
   case X86II::MO_DARWIN_STUB: {
     Name += "$stub";
     MCSymbol *Sym = Ctx.GetOrCreateSymbol(Name.str());
-    const MCSymbol *&StubSym = getMachOMMI().getFnStubEntry(Sym);
-    if (StubSym == 0)
-      StubSym = AsmPrinter.GetGlobalValueSymbol(GV);
-    return Sym;
-  }
-  // FIXME: These probably should be a modifier on the symbol or something??
-  case X86II::MO_TLSGD:     Name += "@TLSGD";     break;
-  case X86II::MO_GOTTPOFF:  Name += "@GOTTPOFF";  break;
-  case X86II::MO_INDNTPOFF: Name += "@INDNTPOFF"; break;
-  case X86II::MO_TPOFF:     Name += "@TPOFF";     break;
-  case X86II::MO_NTPOFF:    Name += "@NTPOFF";    break;
-  case X86II::MO_GOTPCREL:  Name += "@GOTPCREL";  break;
-  case X86II::MO_GOT:       Name += "@GOT";       break;
-  case X86II::MO_GOTOFF:    Name += "@GOTOFF";    break;
-  case X86II::MO_PLT:       Name += "@PLT";       break;
-  }
-  
-  return Ctx.GetOrCreateSymbol(Name.str());
-}
-
-MCSymbol *X86MCInstLower::
-GetExternalSymbolSymbol(const MachineOperand &MO) const {
-  SmallString<128> Name;
-  Name += AsmPrinter.MAI->getGlobalPrefix();
-  Name += MO.getSymbolName();
-  
-  switch (MO.getTargetFlags()) {
-  default: llvm_unreachable("Unknown target flag on GV operand");
-  case X86II::MO_NO_FLAG:                // No flag.
-  case X86II::MO_GOT_ABSOLUTE_ADDRESS:   // Doesn't modify symbol name.
-  case X86II::MO_PIC_BASE_OFFSET:        // Doesn't modify symbol name.
-    break;
-  case X86II::MO_DLLIMPORT: {
-    // Handle dllimport linkage.
-    const char *Prefix = "__imp_";
-    Name.insert(Name.begin(), Prefix, Prefix+strlen(Prefix));
-    break;
-  }
-  case X86II::MO_DARWIN_STUB: {
-    Name += "$stub";
-    MCSymbol *Sym = Ctx.GetOrCreateSymbol(Name.str());
-    const MCSymbol *&StubSym = getMachOMMI().getFnStubEntry(Sym);
-
-    if (StubSym == 0) {
+    MCSymbol *&StubSym = getMachOMMI().getFnStubEntry(Sym);
+    if (StubSym)
+      return Sym;
+    
+    if (MO.isGlobal()) {
+      StubSym = AsmPrinter.GetGlobalValueSymbol(MO.getGlobal());
+    } else {
       Name.erase(Name.end()-5, Name.end());
       StubSym = Ctx.GetOrCreateSymbol(Name.str());
     }
     return Sym;
   }
-  // FIXME: These probably should be a modifier on the symbol or something??
-  case X86II::MO_TLSGD:     Name += "@TLSGD";     break;
-  case X86II::MO_GOTTPOFF:  Name += "@GOTTPOFF";  break;
-  case X86II::MO_INDNTPOFF: Name += "@INDNTPOFF"; break;
-  case X86II::MO_TPOFF:     Name += "@TPOFF";     break;
-  case X86II::MO_NTPOFF:    Name += "@NTPOFF";    break;
-  case X86II::MO_GOTPCREL:  Name += "@GOTPCREL";  break;
-  case X86II::MO_GOT:       Name += "@GOT";       break;
-  case X86II::MO_GOTOFF:    Name += "@GOTOFF";    break;
-  case X86II::MO_PLT:       Name += "@PLT";       break;
   }
-  
-  return Ctx.GetOrCreateSymbol(Name.str());
-}
 
-MCSymbol *X86MCInstLower::GetJumpTableSymbol(const MachineOperand &MO) const {
-  SmallString<256> Name;
-  // FIXME: Use AsmPrinter.GetJTISymbol.  @TLSGD shouldn't be part of the symbol
-  // name!
-  raw_svector_ostream(Name) << AsmPrinter.MAI->getPrivateGlobalPrefix() << "JTI"
-    << AsmPrinter.getFunctionNumber() << '_' << MO.getIndex();
-  
-  switch (MO.getTargetFlags()) {
-  default:
-    llvm_unreachable("Unknown target flag on GV operand");
-  case X86II::MO_NO_FLAG:    // No flag.
-  case X86II::MO_PIC_BASE_OFFSET:
-  case X86II::MO_DARWIN_NONLAZY_PIC_BASE:
-  case X86II::MO_DARWIN_HIDDEN_NONLAZY_PIC_BASE:
-    break;
-    // FIXME: These probably should be a modifier on the symbol or something??
-  case X86II::MO_TLSGD:     Name += "@TLSGD";     break;
-  case X86II::MO_GOTTPOFF:  Name += "@GOTTPOFF";  break;
-  case X86II::MO_INDNTPOFF: Name += "@INDNTPOFF"; break;
-  case X86II::MO_TPOFF:     Name += "@TPOFF";     break;
-  case X86II::MO_NTPOFF:    Name += "@NTPOFF";    break;
-  case X86II::MO_GOTPCREL:  Name += "@GOTPCREL";  break;
-  case X86II::MO_GOT:       Name += "@GOT";       break;
-  case X86II::MO_GOTOFF:    Name += "@GOTOFF";    break;
-  case X86II::MO_PLT:       Name += "@PLT";       break;
-  }
-  
-  // Create a symbol for the name.
-  return Ctx.GetOrCreateSymbol(Name.str());
-}
-
-
-MCSymbol *X86MCInstLower::
-GetConstantPoolIndexSymbol(const MachineOperand &MO) const {
-  SmallString<256> Name;
-  // FIXME: USe AsmPrinter.GetCPISymbol.  @TLSGD shouldn't be part of the symbol
-  // name!
-  raw_svector_ostream(Name) << AsmPrinter.MAI->getPrivateGlobalPrefix() << "CPI"
-    << AsmPrinter.getFunctionNumber() << '_' << MO.getIndex();
-  
-  switch (MO.getTargetFlags()) {
-  default:
-    llvm_unreachable("Unknown target flag on GV operand");
-  case X86II::MO_NO_FLAG:    // No flag.
-  case X86II::MO_PIC_BASE_OFFSET:
-  case X86II::MO_DARWIN_NONLAZY_PIC_BASE:
-  case X86II::MO_DARWIN_HIDDEN_NONLAZY_PIC_BASE:
-    break;
-    // FIXME: These probably should be a modifier on the symbol or something??
-  case X86II::MO_TLSGD:     Name += "@TLSGD";     break;
-  case X86II::MO_GOTTPOFF:  Name += "@GOTTPOFF";  break;
-  case X86II::MO_INDNTPOFF: Name += "@INDNTPOFF"; break;
-  case X86II::MO_TPOFF:     Name += "@TPOFF";     break;
-  case X86II::MO_NTPOFF:    Name += "@NTPOFF";    break;
-  case X86II::MO_GOTPCREL:  Name += "@GOTPCREL";  break;
-  case X86II::MO_GOT:       Name += "@GOT";       break;
-  case X86II::MO_GOTOFF:    Name += "@GOTOFF";    break;
-  case X86II::MO_PLT:       Name += "@PLT";       break;
-  }
-  
-  // Create a symbol for the name.
   return Ctx.GetOrCreateSymbol(Name.str());
 }
 
-MCSymbol *X86MCInstLower::
-GetBlockAddressSymbol(const MachineOperand &MO) const {
-  const char *Suffix = "";
-  switch (MO.getTargetFlags()) {
-  default: llvm_unreachable("Unknown target flag on BA operand");
-  case X86II::MO_NO_FLAG:         break; // No flag.
-  case X86II::MO_PIC_BASE_OFFSET: break; // Doesn't modify symbol name.
-  case X86II::MO_GOTOFF: Suffix = "@GOTOFF"; break;
-  }
-
-  return AsmPrinter.GetBlockAddressSymbol(MO.getBlockAddress(), Suffix);
-}
-
 MCOperand X86MCInstLower::LowerSymbolOperand(const MachineOperand &MO,
                                              MCSymbol *Sym) const {
   // FIXME: We would like an efficient form for this, so we don't have to do a
   // lot of extra uniquing.
-  const MCExpr *Expr = MCSymbolRefExpr::Create(Sym, Ctx);
+  const MCExpr *Expr = 0;
+  X86MCTargetExpr::VariantKind RefKind = X86MCTargetExpr::Invalid;
   
   switch (MO.getTargetFlags()) {
   default: llvm_unreachable("Unknown target flag on GV operand");
   case X86II::MO_NO_FLAG:    // No flag.
-      
   // These affect the name of the symbol, not any suffix.
   case X86II::MO_DARWIN_NONLAZY:
   case X86II::MO_DLLIMPORT:
   case X86II::MO_DARWIN_STUB:
-  case X86II::MO_TLSGD:
-  case X86II::MO_GOTTPOFF:
-  case X86II::MO_INDNTPOFF:
-  case X86II::MO_TPOFF:
-  case X86II::MO_NTPOFF:
-  case X86II::MO_GOTPCREL:
-  case X86II::MO_GOT:
-  case X86II::MO_GOTOFF:
-  case X86II::MO_PLT:
     break;
+      
+  case X86II::MO_TLSGD:     RefKind = X86MCTargetExpr::TLSGD; break;
+  case X86II::MO_GOTTPOFF:  RefKind = X86MCTargetExpr::GOTTPOFF; break;
+  case X86II::MO_INDNTPOFF: RefKind = X86MCTargetExpr::INDNTPOFF; break;
+  case X86II::MO_TPOFF:     RefKind = X86MCTargetExpr::TPOFF; break;
+  case X86II::MO_NTPOFF:    RefKind = X86MCTargetExpr::NTPOFF; break;
+  case X86II::MO_GOTPCREL:  RefKind = X86MCTargetExpr::GOTPCREL; break;
+  case X86II::MO_GOT:       RefKind = X86MCTargetExpr::GOT; break;
+  case X86II::MO_GOTOFF:    RefKind = X86MCTargetExpr::GOTOFF; break;
+  case X86II::MO_PLT:       RefKind = X86MCTargetExpr::PLT; break;
   case X86II::MO_PIC_BASE_OFFSET:
   case X86II::MO_DARWIN_NONLAZY_PIC_BASE:
   case X86II::MO_DARWIN_HIDDEN_NONLAZY_PIC_BASE:
+    Expr = MCSymbolRefExpr::Create(Sym, Ctx);
     // Subtract the pic base.
     Expr = MCBinaryExpr::CreateSub(Expr, 
                                MCSymbolRefExpr::Create(GetPICBaseSymbol(), Ctx),
@@ -273,6 +164,13 @@ MCOperand X86MCInstLower::LowerSymbolOperand(const MachineOperand &MO,
     break;
   }
   
+  if (Expr == 0) {
+    if (RefKind == X86MCTargetExpr::Invalid)
+      Expr = MCSymbolRefExpr::Create(Sym, Ctx);
+    else
+      Expr = X86MCTargetExpr::Create(Sym, RefKind, Ctx);
+  }
+  
   if (!MO.isJTI() && MO.getOffset())
     Expr = MCBinaryExpr::CreateAdd(Expr,
                                    MCConstantExpr::Create(MO.getOffset(), Ctx),
@@ -301,6 +199,17 @@ static void lower_lea64_32mem(MCInst *MI, unsigned OpNo) {
   }
 }
 
+/// LowerSubReg32_Op0 - Things like MOVZX16rr8 -> MOVZX32rr8.
+static void LowerSubReg32_Op0(MCInst &OutMI, unsigned NewOpc) {
+  OutMI.setOpcode(NewOpc);
+  lower_subreg32(&OutMI, 0);
+}
+/// LowerUnaryToTwoAddr - R = setb   -> R = sbb R, R
+static void LowerUnaryToTwoAddr(MCInst &OutMI, unsigned NewOpc) {
+  OutMI.setOpcode(NewOpc);
+  OutMI.addOperand(OutMI.getOperand(0));
+  OutMI.addOperand(OutMI.getOperand(0));
+}
 
 
 void X86MCInstLower::Lower(const MachineInstr *MI, MCInst &OutMI) const {
@@ -327,19 +236,20 @@ void X86MCInstLower::Lower(const MachineInstr *MI, MCInst &OutMI) const {
                        MO.getMBB()->getSymbol(Ctx), Ctx));
       break;
     case MachineOperand::MO_GlobalAddress:
-      MCOp = LowerSymbolOperand(MO, GetGlobalAddressSymbol(MO));
+      MCOp = LowerSymbolOperand(MO, GetSymbolFromOperand(MO));
       break;
     case MachineOperand::MO_ExternalSymbol:
-      MCOp = LowerSymbolOperand(MO, GetExternalSymbolSymbol(MO));
+      MCOp = LowerSymbolOperand(MO, GetSymbolFromOperand(MO));
       break;
     case MachineOperand::MO_JumpTableIndex:
-      MCOp = LowerSymbolOperand(MO, GetJumpTableSymbol(MO));
+      MCOp = LowerSymbolOperand(MO, AsmPrinter.GetJTISymbol(MO.getIndex()));
       break;
     case MachineOperand::MO_ConstantPoolIndex:
-      MCOp = LowerSymbolOperand(MO, GetConstantPoolIndexSymbol(MO));
+      MCOp = LowerSymbolOperand(MO, AsmPrinter.GetCPISymbol(MO.getIndex()));
       break;
     case MachineOperand::MO_BlockAddress:
-      MCOp = LowerSymbolOperand(MO, GetBlockAddressSymbol(MO));
+      MCOp = LowerSymbolOperand(MO,
+                        AsmPrinter.GetBlockAddressSymbol(MO.getBlockAddress()));
       break;
     }
     
@@ -351,72 +261,48 @@ void X86MCInstLower::Lower(const MachineInstr *MI, MCInst &OutMI) const {
   case X86::LEA64_32r: // Handle 'subreg rewriting' for the lea64_32mem operand.
     lower_lea64_32mem(&OutMI, 1);
     break;
-  case X86::MOVZX16rr8:
-    OutMI.setOpcode(X86::MOVZX32rr8);
-    lower_subreg32(&OutMI, 0);
-    break;
-  case X86::MOVZX16rm8:
-    OutMI.setOpcode(X86::MOVZX32rm8);
-    lower_subreg32(&OutMI, 0);
-    break;
-  case X86::MOVSX16rr8:
-    OutMI.setOpcode(X86::MOVSX32rr8);
-    lower_subreg32(&OutMI, 0);
-    break;
-  case X86::MOVSX16rm8:
-    OutMI.setOpcode(X86::MOVSX32rm8);
-    lower_subreg32(&OutMI, 0);
-    break;
-  case X86::MOVZX64rr32:
-    OutMI.setOpcode(X86::MOV32rr);
-    lower_subreg32(&OutMI, 0);
-    break;
-  case X86::MOVZX64rm32:
-    OutMI.setOpcode(X86::MOV32rm);
-    lower_subreg32(&OutMI, 0);
-    break;
-  case X86::MOV64ri64i32:
-    OutMI.setOpcode(X86::MOV32ri);
-    lower_subreg32(&OutMI, 0);
-    break;
-  case X86::MOVZX64rr8:
-    OutMI.setOpcode(X86::MOVZX32rr8);
-    lower_subreg32(&OutMI, 0);
-    break;
-  case X86::MOVZX64rm8:
-    OutMI.setOpcode(X86::MOVZX32rm8);
-    lower_subreg32(&OutMI, 0);
-    break;
-  case X86::MOVZX64rr16:
-    OutMI.setOpcode(X86::MOVZX32rr16);
-    lower_subreg32(&OutMI, 0);
-    break;
-  case X86::MOVZX64rm16:
-    OutMI.setOpcode(X86::MOVZX32rm16);
-    lower_subreg32(&OutMI, 0);
-    break;
+  case X86::MOVZX16rr8:   LowerSubReg32_Op0(OutMI, X86::MOVZX32rr8); break;
+  case X86::MOVZX16rm8:   LowerSubReg32_Op0(OutMI, X86::MOVZX32rm8); break;
+  case X86::MOVSX16rr8:   LowerSubReg32_Op0(OutMI, X86::MOVSX32rr8); break;
+  case X86::MOVSX16rm8:   LowerSubReg32_Op0(OutMI, X86::MOVSX32rm8); break;
+  case X86::MOVZX64rr32:  LowerSubReg32_Op0(OutMI, X86::MOV32rr); break;
+  case X86::MOVZX64rm32:  LowerSubReg32_Op0(OutMI, X86::MOV32rm); break;
+  case X86::MOV64ri64i32: LowerSubReg32_Op0(OutMI, X86::MOV32ri); break;
+  case X86::MOVZX64rr8:   LowerSubReg32_Op0(OutMI, X86::MOVZX32rr8); break;
+  case X86::MOVZX64rm8:   LowerSubReg32_Op0(OutMI, X86::MOVZX32rm8); break;
+  case X86::MOVZX64rr16:  LowerSubReg32_Op0(OutMI, X86::MOVZX32rr16); break;
+  case X86::MOVZX64rm16:  LowerSubReg32_Op0(OutMI, X86::MOVZX32rm16); break;
+  case X86::SETB_C8r:     LowerUnaryToTwoAddr(OutMI, X86::SBB8rr); break;
+  case X86::SETB_C16r:    LowerUnaryToTwoAddr(OutMI, X86::SBB16rr); break;
+  case X86::SETB_C32r:    LowerUnaryToTwoAddr(OutMI, X86::SBB32rr); break;
+  case X86::SETB_C64r:    LowerUnaryToTwoAddr(OutMI, X86::SBB64rr); break;
+  case X86::MOV8r0:       LowerUnaryToTwoAddr(OutMI, X86::XOR8rr); break;
+  case X86::MOV32r0:      LowerUnaryToTwoAddr(OutMI, X86::XOR32rr); break;
+  case X86::MMX_V_SET0:   LowerUnaryToTwoAddr(OutMI, X86::MMX_PXORrr); break;
+  case X86::MMX_V_SETALLONES:
+    LowerUnaryToTwoAddr(OutMI, X86::MMX_PCMPEQDrr); break;
+  case X86::FsFLD0SS:     LowerUnaryToTwoAddr(OutMI, X86::PXORrr); break;
+  case X86::FsFLD0SD:     LowerUnaryToTwoAddr(OutMI, X86::PXORrr); break;
+  case X86::V_SET0:       LowerUnaryToTwoAddr(OutMI, X86::XORPSrr); break;
+  case X86::V_SETALLONES: LowerUnaryToTwoAddr(OutMI, X86::PCMPEQDrr); break;
+
   case X86::MOV16r0:
-    OutMI.setOpcode(X86::MOV32r0);
-    lower_subreg32(&OutMI, 0);
+    LowerSubReg32_Op0(OutMI, X86::MOV32r0);   // MOV16r0 -> MOV32r0
+    LowerUnaryToTwoAddr(OutMI, X86::XOR32rr); // MOV32r0 -> XOR32rr
     break;
   case X86::MOV64r0:
-    OutMI.setOpcode(X86::MOV32r0);
-    lower_subreg32(&OutMI, 0);
+    LowerSubReg32_Op0(OutMI, X86::MOV32r0);   // MOV64r0 -> MOV32r0
+    LowerUnaryToTwoAddr(OutMI, X86::XOR32rr); // MOV32r0 -> XOR32rr
     break;
   }
 }
 
 
 
-void X86AsmPrinter::printInstructionThroughMCStreamer(const MachineInstr *MI) {
+void X86AsmPrinter::EmitInstruction(const MachineInstr *MI) {
   X86MCInstLower MCInstLowering(OutContext, Mang, *this);
   switch (MI->getOpcode()) {
-  case TargetInstrInfo::DBG_LABEL:
-  case TargetInstrInfo::EH_LABEL:
-  case TargetInstrInfo::GC_LABEL:
-    printLabel(MI);
-    return;
-  case TargetInstrInfo::DEBUG_VALUE: {
+  case TargetOpcode::DBG_VALUE: {
     // FIXME: if this is implemented for another target before it goes
     // away completely, the common part should be moved into AsmPrinter.
     if (!VerboseAsm)
@@ -428,10 +314,35 @@ void X86AsmPrinter::printInstructionThroughMCStreamer(const MachineInstr *MI) {
     O << V.getName();
     O << " <- ";
     if (NOps==3) {
-      // Register or immediate value
+      // Register or immediate value. Register 0 means undef.
       assert(MI->getOperand(0).getType()==MachineOperand::MO_Register ||
-             MI->getOperand(0).getType()==MachineOperand::MO_Immediate);
-      printOperand(MI, 0);
+             MI->getOperand(0).getType()==MachineOperand::MO_Immediate ||
+             MI->getOperand(0).getType()==MachineOperand::MO_FPImmediate);
+      if (MI->getOperand(0).getType()==MachineOperand::MO_Register &&
+          MI->getOperand(0).getReg()==0) {
+        // Suppress offset in this case, it is not meaningful.
+        O << "undef";
+        OutStreamer.AddBlankLine();
+        return;
+      } else if (MI->getOperand(0).getType()==MachineOperand::MO_FPImmediate) {
+        // This is more naturally done in printOperand, but since the only use
+        // of such an operand is in this comment and that is temporary (and it's
+        // ugly), we prefer to keep this localized.
+        // The include of Type.h may be removable when this code is.
+        if (MI->getOperand(0).getFPImm()->getType()->isFloatTy() ||
+            MI->getOperand(0).getFPImm()->getType()->isDoubleTy())
+          MI->getOperand(0).print(O, &TM);
+        else {
+          // There is no good way to print long double.  Convert a copy to
+          // double.  Ah well, it's only a comment.
+          bool ignored;
+          APFloat APF = APFloat(MI->getOperand(0).getFPImm()->getValueAPF());
+          APF.convert(APFloat::IEEEdouble, APFloat::rmNearestTiesToEven,
+                      &ignored);
+          O << "(long double) " << APF.convertToDouble();
+        }
+      } else
+        printOperand(MI, 0);
     } else {
       // Frame address.  Currently handles register +- offset only.
       assert(MI->getOperand(0).getType()==MachineOperand::MO_Register);
@@ -440,17 +351,9 @@ void X86AsmPrinter::printInstructionThroughMCStreamer(const MachineInstr *MI) {
     }
     O << "+";
     printOperand(MI, NOps-2);
+    OutStreamer.AddBlankLine();
     return;
   }
-  case TargetInstrInfo::INLINEASM:
-    printInlineAsm(MI);
-    return;
-  case TargetInstrInfo::IMPLICIT_DEF:
-    printImplicitDef(MI);
-    return;
-  case TargetInstrInfo::KILL:
-    printKill(MI);
-    return;
   case X86::MOVPC32r: {
     MCInst TmpInst;
     // This is a pseudo op for a two instruction sequence with a label, which
@@ -466,8 +369,7 @@ void X86AsmPrinter::printInstructionThroughMCStreamer(const MachineInstr *MI) {
     // lot of extra uniquing.
     TmpInst.addOperand(MCOperand::CreateExpr(MCSymbolRefExpr::Create(PICBase,
                                                                  OutContext)));
-    printMCInst(&TmpInst);
-    O << '\n';
+    OutStreamer.EmitInstruction(TmpInst);
     
     // Emit the label.
     OutStreamer.EmitLabel(PICBase);
@@ -475,7 +377,7 @@ void X86AsmPrinter::printInstructionThroughMCStreamer(const MachineInstr *MI) {
     // popl $reg
     TmpInst.setOpcode(X86::POP32r);
     TmpInst.getOperand(0) = MCOperand::CreateReg(MI->getOperand(0).getReg());
-    printMCInst(&TmpInst);
+    OutStreamer.EmitInstruction(TmpInst);
     return;
   }
       
@@ -497,7 +399,7 @@ void X86AsmPrinter::printInstructionThroughMCStreamer(const MachineInstr *MI) {
     OutStreamer.EmitLabel(DotSym);
     
     // Now that we have emitted the label, lower the complex operand expression.
-    MCSymbol *OpSym = MCInstLowering.GetExternalSymbolSymbol(MI->getOperand(2));
+    MCSymbol *OpSym = MCInstLowering.GetSymbolFromOperand(MI->getOperand(2));
     
     const MCExpr *DotExpr = MCSymbolRefExpr::Create(DotSym, OutContext);
     const MCExpr *PICBase =
@@ -512,7 +414,7 @@ void X86AsmPrinter::printInstructionThroughMCStreamer(const MachineInstr *MI) {
     TmpInst.addOperand(MCOperand::CreateReg(MI->getOperand(0).getReg()));
     TmpInst.addOperand(MCOperand::CreateReg(MI->getOperand(1).getReg()));
     TmpInst.addOperand(MCOperand::CreateExpr(DotExpr));
-    printMCInst(&TmpInst);
+    OutStreamer.EmitInstruction(TmpInst);
     return;
   }
   }
@@ -520,7 +422,6 @@ void X86AsmPrinter::printInstructionThroughMCStreamer(const MachineInstr *MI) {
   MCInst TmpInst;
   MCInstLowering.Lower(MI, TmpInst);
   
-  
-  printMCInst(&TmpInst);
+  OutStreamer.EmitInstruction(TmpInst);
 }
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86MCInstLower.h b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86MCInstLower.h
index 94f8bfc..ebd23f6 100644
--- a/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86MCInstLower.h
+++ b/libclamav/c++/llvm/lib/Target/X86/AsmPrinter/X86MCInstLower.h
@@ -39,11 +39,7 @@ public:
 
   MCSymbol *GetPICBaseSymbol() const;
   
-  MCSymbol *GetGlobalAddressSymbol(const MachineOperand &MO) const;
-  MCSymbol *GetExternalSymbolSymbol(const MachineOperand &MO) const;
-  MCSymbol *GetJumpTableSymbol(const MachineOperand &MO) const;
-  MCSymbol *GetConstantPoolIndexSymbol(const MachineOperand &MO) const;
-  MCSymbol *GetBlockAddressSymbol(const MachineOperand &MO) const;
+  MCSymbol *GetSymbolFromOperand(const MachineOperand &MO) const;
   MCOperand LowerSymbolOperand(const MachineOperand &MO, MCSymbol *Sym) const;
   
 private:
diff --git a/libclamav/c++/llvm/lib/Target/X86/CMakeLists.txt b/libclamav/c++/llvm/lib/Target/X86/CMakeLists.txt
index 4186fec..61f26a7 100644
--- a/libclamav/c++/llvm/lib/Target/X86/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/Target/X86/CMakeLists.txt
@@ -25,6 +25,8 @@ set(sources
   X86InstrInfo.cpp
   X86JITInfo.cpp
   X86MCAsmInfo.cpp
+  X86MCCodeEmitter.cpp 
+  X86MCTargetExpr.cpp
   X86RegisterInfo.cpp
   X86Subtarget.cpp
   X86TargetMachine.cpp
diff --git a/libclamav/c++/llvm/lib/Target/X86/Makefile b/libclamav/c++/llvm/lib/Target/X86/Makefile
index 895868b..f4ff894 100644
--- a/libclamav/c++/llvm/lib/Target/X86/Makefile
+++ b/libclamav/c++/llvm/lib/Target/X86/Makefile
@@ -18,6 +18,7 @@ BUILT_SOURCES = X86GenRegisterInfo.h.inc X86GenRegisterNames.inc \
                 X86GenAsmWriter1.inc X86GenDAGISel.inc  \
                 X86GenDisassemblerTables.inc X86GenFastISel.inc \
                 X86GenCallingConv.inc X86GenSubtarget.inc \
+		X86GenEDInfo.inc
 
 DIRS = AsmPrinter AsmParser Disassembler TargetInfo
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/README-SSE.txt b/libclamav/c++/llvm/lib/Target/X86/README-SSE.txt
index 0f3e44b..19eb05e 100644
--- a/libclamav/c++/llvm/lib/Target/X86/README-SSE.txt
+++ b/libclamav/c++/llvm/lib/Target/X86/README-SSE.txt
@@ -376,7 +376,7 @@ ret
 ... saving two instructions.
 
 The basic idea is that a reload from a spill slot, can, if only one 4-byte 
-chunk is used, bring in 3 zeros the the one element instead of 4 elements.
+chunk is used, bring in 3 zeros the one element instead of 4 elements.
 This can be used to simplify a variety of shuffle operations, where the
 elements are fixed zeros.
 
@@ -936,3 +936,54 @@ Also, the 'ret's should be shared.  This is PR6032.
 
 //===---------------------------------------------------------------------===//
 
+These should compile into the same code (PR6214); perhaps instcombine should
+canonicalize the former into the latter?
+
+define float @foo(float %x) nounwind {
+  %t = bitcast float %x to i32
+  %s = and i32 %t, 2147483647
+  %d = bitcast i32 %s to float
+  ret float %d
+}
+
+declare float @fabsf(float %n)
+define float @bar(float %x) nounwind {
+  %d = call float @fabsf(float %x)
+  ret float %d
+}
+
+//===---------------------------------------------------------------------===//
+
+This IR (from PR6194):
+
+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64"
+target triple = "x86_64-apple-darwin10.0.0"
+
+%0 = type { double, double }
+%struct.float3 = type { float, float, float }
+
+define void @test(%0, %struct.float3* nocapture %res) nounwind noinline ssp {
+entry:
+  %tmp18 = extractvalue %0 %0, 0                  ; <double> [#uses=1]
+  %tmp19 = bitcast double %tmp18 to i64           ; <i64> [#uses=1]
+  %tmp20 = zext i64 %tmp19 to i128                ; <i128> [#uses=1]
+  %tmp10 = lshr i128 %tmp20, 32                   ; <i128> [#uses=1]
+  %tmp11 = trunc i128 %tmp10 to i32               ; <i32> [#uses=1]
+  %tmp12 = bitcast i32 %tmp11 to float            ; <float> [#uses=1]
+  %tmp5 = getelementptr inbounds %struct.float3* %res, i64 0, i32 1 ; <float*> [#uses=1]
+  store float %tmp12, float* %tmp5
+  ret void
+}
+
+Compiles to:
+
+_test:                                  ## @test
+	movd	%xmm0, %rax
+	shrq	$32, %rax
+	movl	%eax, 4(%rdi)
+	ret
+
+This would be better kept in the SSE unit by treating XMM0 as a 4xfloat and
+doing a shuffle from v[1] to v[0] then a float store.
+
+//===---------------------------------------------------------------------===//
diff --git a/libclamav/c++/llvm/lib/Target/X86/README-UNIMPLEMENTED.txt b/libclamav/c++/llvm/lib/Target/X86/README-UNIMPLEMENTED.txt
index 69dc8ee..c26c75a 100644
--- a/libclamav/c++/llvm/lib/Target/X86/README-UNIMPLEMENTED.txt
+++ b/libclamav/c++/llvm/lib/Target/X86/README-UNIMPLEMENTED.txt
@@ -11,4 +11,4 @@ which would be great.
 2) vector comparisons
 3) vector fp<->int conversions: PR2683, PR2684, PR2685, PR2686, PR2688
 4) bitcasts from vectors to scalars: PR2804
-
+5) llvm.atomic.cmp.swap.i128.p0i128: PR3462
diff --git a/libclamav/c++/llvm/lib/Target/X86/README.txt b/libclamav/c++/llvm/lib/Target/X86/README.txt
index aa7bb3d..3c6138b 100644
--- a/libclamav/c++/llvm/lib/Target/X86/README.txt
+++ b/libclamav/c++/llvm/lib/Target/X86/README.txt
@@ -1868,3 +1868,69 @@ carried over to machine instructions. Asm printer (or JIT) can use this
 information to add the "lock" prefix.
 
 //===---------------------------------------------------------------------===//
+
+_Bool bar(int *x) { return *x & 1; }
+
+define zeroext i1 @bar(i32* nocapture %x) nounwind readonly {
+entry:
+  %tmp1 = load i32* %x                            ; <i32> [#uses=1]
+  %and = and i32 %tmp1, 1                         ; <i32> [#uses=1]
+  %tobool = icmp ne i32 %and, 0                   ; <i1> [#uses=1]
+  ret i1 %tobool
+}
+
+bar:                                                        # @bar
+# BB#0:                                                     # %entry
+	movl	4(%esp), %eax
+	movb	(%eax), %al
+	andb	$1, %al
+	movzbl	%al, %eax
+	ret
+
+Missed optimization: should be movl+andl.
+
+//===---------------------------------------------------------------------===//
+
+Consider the following two functions compiled with clang:
+_Bool foo(int *x) { return !(*x & 4); }
+unsigned bar(int *x) { return !(*x & 4); }
+
+foo:
+	movl	4(%esp), %eax
+	testb	$4, (%eax)
+	sete	%al
+	movzbl	%al, %eax
+	ret
+
+bar:
+	movl	4(%esp), %eax
+	movl	(%eax), %eax
+	shrl	$2, %eax
+	andl	$1, %eax
+	xorl	$1, %eax
+	ret
+
+The second function generates more code even though the two functions
+are functionally identical.
+
+//===---------------------------------------------------------------------===//
+
+Take the following C code:
+int x(int y) { return (y & 63) << 14; }
+
+Code produced by gcc:
+	andl	$63, %edi
+	sall	$14, %edi
+	movl	%edi, %eax
+	ret
+
+Code produced by clang:
+	shll	$14, %edi
+	movl	%edi, %eax
+	andl	$1032192, %eax
+	ret
+
+The code produced by gcc is 3 bytes shorter.  This sort of construct often
+shows up with bitfields.
+
+//===---------------------------------------------------------------------===//
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86.h b/libclamav/c++/llvm/lib/Target/X86/X86.h
index 684c61f..1a1e447 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86.h
+++ b/libclamav/c++/llvm/lib/Target/X86/X86.h
@@ -23,6 +23,7 @@ class X86TargetMachine;
 class FunctionPass;
 class MachineCodeEmitter;
 class MCCodeEmitter;
+class MCContext;
 class JITCodeEmitter;
 class Target;
 class formatted_raw_ostream;
@@ -46,15 +47,13 @@ FunctionPass *createX87FPRegKillInserterPass();
 
 /// createX86CodeEmitterPass - Return a pass that emits the collected X86 code
 /// to the specified MCE object.
-
-FunctionPass *createX86CodeEmitterPass(X86TargetMachine &TM, 
-                                       MachineCodeEmitter &MCE);
 FunctionPass *createX86JITCodeEmitterPass(X86TargetMachine &TM,
                                           JITCodeEmitter &JCE);
-FunctionPass *createX86ObjectCodeEmitterPass(X86TargetMachine &TM,
-                                             ObjectCodeEmitter &OCE);
 
-MCCodeEmitter *createX86MCCodeEmitter(const Target &, TargetMachine &TM);
+MCCodeEmitter *createX86_32MCCodeEmitter(const Target &, TargetMachine &TM,
+                                         MCContext &Ctx);
+MCCodeEmitter *createX86_64MCCodeEmitter(const Target &, TargetMachine &TM,
+                                         MCContext &Ctx);
 
 /// createX86EmitCodeToMemory - Returns a pass that converts a register
 /// allocated function into raw machine code in a dynamically
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86COFFMachineModuleInfo.cpp b/libclamav/c++/llvm/lib/Target/X86/X86COFFMachineModuleInfo.cpp
index ea52795..ab67acb 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86COFFMachineModuleInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86COFFMachineModuleInfo.cpp
@@ -27,90 +27,55 @@ X86COFFMachineModuleInfo::X86COFFMachineModuleInfo(const MachineModuleInfo &) {
 X86COFFMachineModuleInfo::~X86COFFMachineModuleInfo() {
 }
 
-void X86COFFMachineModuleInfo::AddFunctionInfo(const Function *F,
-                                            const X86MachineFunctionInfo &Val) {
-  FunctionInfoMap[F] = Val;
+void X86COFFMachineModuleInfo::addExternalFunction(const StringRef& Name) {
+  CygMingStubs.insert(Name);
 }
 
-
-
-static X86MachineFunctionInfo calculateFunctionInfo(const Function *F,
-                                                    const TargetData &TD) {
-  X86MachineFunctionInfo Info;
-  uint64_t Size = 0;
-  
-  switch (F->getCallingConv()) {
-  case CallingConv::X86_StdCall:
-    Info.setDecorationStyle(StdCall);
-    break;
-  case CallingConv::X86_FastCall:
-    Info.setDecorationStyle(FastCall);
-    break;
-  default:
-    return Info;
-  }
-  
-  unsigned argNum = 1;
-  for (Function::const_arg_iterator AI = F->arg_begin(), AE = F->arg_end();
-       AI != AE; ++AI, ++argNum) {
-    const Type* Ty = AI->getType();
-    
-    // 'Dereference' type in case of byval parameter attribute
-    if (F->paramHasAttr(argNum, Attribute::ByVal))
-      Ty = cast<PointerType>(Ty)->getElementType();
-    
-    // Size should be aligned to DWORD boundary
-    Size += ((TD.getTypeAllocSize(Ty) + 3)/4)*4;
-  }
-  
-  // We're not supporting tooooo huge arguments :)
-  Info.setBytesToPopOnReturn((unsigned int)Size);
-  return Info;
-}
-
-
-/// DecorateCygMingName - Query FunctionInfoMap and use this information for
-/// various name decorations for Cygwin and MingW.
+/// DecorateCygMingName - Apply various name decorations if the function uses
+/// stdcall or fastcall calling convention.
 void X86COFFMachineModuleInfo::DecorateCygMingName(SmallVectorImpl<char> &Name,
                                                    const GlobalValue *GV,
                                                    const TargetData &TD) {
   const Function *F = dyn_cast<Function>(GV);
   if (!F) return;
-  
-  // Save function name for later type emission.
-  if (F->isDeclaration())
-    CygMingStubs.insert(StringRef(Name.data(), Name.size()));
-  
+
   // We don't want to decorate non-stdcall or non-fastcall functions right now
   CallingConv::ID CC = F->getCallingConv();
   if (CC != CallingConv::X86_StdCall && CC != CallingConv::X86_FastCall)
     return;
-  
-  const X86MachineFunctionInfo *Info;
-  
-  FMFInfoMap::const_iterator info_item = FunctionInfoMap.find(F);
-  if (info_item == FunctionInfoMap.end()) {
-    // Calculate apropriate function info and populate map
-    FunctionInfoMap[F] = calculateFunctionInfo(F, TD);
-    Info = &FunctionInfoMap[F];
-  } else {
-    Info = &info_item->second;
-  }
-  
-  if (Info->getDecorationStyle() == None) return;
+
+  unsigned ArgWords = 0;
+  DenseMap<const Function*, unsigned>::const_iterator item = FnArgWords.find(F);
+  if (item == FnArgWords.end()) {
+    // Calculate argument sizes
+    for (Function::const_arg_iterator AI = F->arg_begin(), AE = F->arg_end();
+         AI != AE; ++AI) {
+      const Type* Ty = AI->getType();
+
+      // 'Dereference' type in case of byval parameter attribute
+      if (AI->hasByValAttr())
+        Ty = cast<PointerType>(Ty)->getElementType();
+
+      // Size should be aligned to DWORD boundary
+      ArgWords += ((TD.getTypeAllocSize(Ty) + 3)/4)*4;
+    }
+
+    FnArgWords[F] = ArgWords;
+  } else
+    ArgWords = item->second;
+
   const FunctionType *FT = F->getFunctionType();
-  
   // "Pure" variadic functions do not receive @0 suffix.
   if (!FT->isVarArg() || FT->getNumParams() == 0 ||
       (FT->getNumParams() == 1 && F->hasStructRetAttr()))
-    raw_svector_ostream(Name) << '@' << Info->getBytesToPopOnReturn();
-  
-  if (Info->getDecorationStyle() == FastCall) {
+    raw_svector_ostream(Name) << '@' << ArgWords;
+
+  if (CC == CallingConv::X86_FastCall) {
     if (Name[0] == '_')
       Name[0] = '@';
     else
       Name.insert(Name.begin(), '@');
-  }    
+  }
 }
 
 /// DecorateCygMingName - Query FunctionInfoMap and use this information for
@@ -121,6 +86,6 @@ void X86COFFMachineModuleInfo::DecorateCygMingName(MCSymbol *&Name,
                                                    const TargetData &TD) {
   SmallString<128> NameStr(Name->getName().begin(), Name->getName().end());
   DecorateCygMingName(NameStr, GV, TD);
-  
+
   Name = Ctx.GetOrCreateSymbol(NameStr.str());
 }
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86COFFMachineModuleInfo.h b/libclamav/c++/llvm/lib/Target/X86/X86COFFMachineModuleInfo.h
index 0e2009e..9de3dcd 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86COFFMachineModuleInfo.h
+++ b/libclamav/c++/llvm/lib/Target/X86/X86COFFMachineModuleInfo.h
@@ -21,44 +21,25 @@
 namespace llvm {
   class X86MachineFunctionInfo;
   class TargetData;
-  
+
 /// X86COFFMachineModuleInfo - This is a MachineModuleInfoImpl implementation
 /// for X86 COFF targets.
 class X86COFFMachineModuleInfo : public MachineModuleInfoImpl {
   StringSet<> CygMingStubs;
-  
-  // We have to propagate some information about MachineFunction to
-  // AsmPrinter. It's ok, when we're printing the function, since we have
-  // access to MachineFunction and can get the appropriate MachineFunctionInfo.
-  // Unfortunately, this is not possible when we're printing reference to
-  // Function (e.g. calling it and so on). Even more, there is no way to get the
-  // corresponding MachineFunctions: it can even be not created at all. That's
-  // why we should use additional structure, when we're collecting all necessary
-  // information.
-  //
-  // This structure is using e.g. for name decoration for stdcall & fastcall'ed
-  // function, since we have to use arguments' size for decoration.
-  typedef std::map<const Function*, X86MachineFunctionInfo> FMFInfoMap;
-  FMFInfoMap FunctionInfoMap;
-  
+  DenseMap<const Function*, unsigned> FnArgWords;
 public:
   X86COFFMachineModuleInfo(const MachineModuleInfo &);
   ~X86COFFMachineModuleInfo();
-  
-  
+
   void DecorateCygMingName(MCSymbol* &Name, MCContext &Ctx,
                            const GlobalValue *GV, const TargetData &TD);
   void DecorateCygMingName(SmallVectorImpl<char> &Name, const GlobalValue *GV,
                            const TargetData &TD);
-  
-  void AddFunctionInfo(const Function *F, const X86MachineFunctionInfo &Val);
-  
 
+  void addExternalFunction(const StringRef& Name);
   typedef StringSet<>::const_iterator stub_iterator;
   stub_iterator stub_begin() const { return CygMingStubs.begin(); }
   stub_iterator stub_end() const { return CygMingStubs.end(); }
-
-  
 };
 
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86CodeEmitter.cpp b/libclamav/c++/llvm/lib/Target/X86/X86CodeEmitter.cpp
index e2c3139..8deadf6 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86CodeEmitter.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86CodeEmitter.cpp
@@ -21,9 +21,7 @@
 #include "X86.h"
 #include "llvm/LLVMContext.h"
 #include "llvm/PassManager.h"
-#include "llvm/CodeGen/MachineCodeEmitter.h"
 #include "llvm/CodeGen/JITCodeEmitter.h"
-#include "llvm/CodeGen/ObjectCodeEmitter.h"
 #include "llvm/CodeGen/MachineFunctionPass.h"
 #include "llvm/CodeGen/MachineInstr.h"
 #include "llvm/CodeGen/MachineModuleInfo.h"
@@ -110,19 +108,10 @@ template<class CodeEmitter>
 
 /// createX86CodeEmitterPass - Return a pass that emits the collected X86 code
 /// to the specified templated MachineCodeEmitter object.
-
-FunctionPass *llvm::createX86CodeEmitterPass(X86TargetMachine &TM,
-                                             MachineCodeEmitter &MCE) {
-  return new Emitter<MachineCodeEmitter>(TM, MCE);
-}
 FunctionPass *llvm::createX86JITCodeEmitterPass(X86TargetMachine &TM,
                                                 JITCodeEmitter &JCE) {
   return new Emitter<JITCodeEmitter>(TM, JCE);
 }
-FunctionPass *llvm::createX86ObjectCodeEmitterPass(X86TargetMachine &TM,
-                                                   ObjectCodeEmitter &OCE) {
-  return new Emitter<ObjectCodeEmitter>(TM, OCE);
-}
 
 template<class CodeEmitter>
 bool Emitter<CodeEmitter>::runOnMachineFunction(MachineFunction &MF) {
@@ -202,8 +191,15 @@ template<class CodeEmitter>
 void Emitter<CodeEmitter>::emitExternalSymbolAddress(const char *ES,
                                                      unsigned Reloc) {
   intptr_t RelocCST = (Reloc == X86::reloc_picrel_word) ? PICBaseOffset : 0;
+
+  // X86 never needs stubs because instruction selection will always pick
+  // an instruction sequence that is large enough to hold any address
+  // to a symbol.
+  // (see X86ISelLowering.cpp, near 2039: X86TargetLowering::LowerCall)
+  bool NeedStub = false;
   MCE.addRelocation(MachineRelocation::getExtSym(MCE.getCurrentPCOffset(),
-                                                 Reloc, ES, RelocCST));
+                                                 Reloc, ES, RelocCST,
+                                                 0, NeedStub));
   if (Reloc == X86::reloc_absolute_dword)
     MCE.emitDWordLE(0);
   else
@@ -253,7 +249,7 @@ void Emitter<CodeEmitter>::emitJumpTableAddress(unsigned JTI, unsigned Reloc,
 
 template<class CodeEmitter>
 unsigned Emitter<CodeEmitter>::getX86RegNum(unsigned RegNo) const {
-  return II->getRegisterInfo().getX86RegNum(RegNo);
+  return X86RegisterInfo::getX86RegNum(RegNo);
 }
 
 inline static unsigned char ModRMByte(unsigned Mod, unsigned RegOpcode,
@@ -391,86 +387,103 @@ void Emitter<CodeEmitter>::emitMemModRMByte(const MachineInstr &MI,
   // If no BaseReg, issue a RIP relative instruction only if the MCE can 
   // resolve addresses on-the-fly, otherwise use SIB (Intel Manual 2A, table
   // 2-7) and absolute references.
-  if ((!Is64BitMode || DispForReloc || BaseReg != 0) &&
+  unsigned BaseRegNo = -1U;
+  if (BaseReg != 0 && BaseReg != X86::RIP)
+    BaseRegNo = getX86RegNum(BaseReg);
+
+  if (// The SIB byte must be used if there is an index register.
       IndexReg.getReg() == 0 && 
-      ((BaseReg == 0 && MCE.earlyResolveAddresses()) || BaseReg == X86::RIP || 
-       (BaseReg != 0 && getX86RegNum(BaseReg) != N86::ESP))) {
-    if (BaseReg == 0 || BaseReg == X86::RIP) {  // Just a displacement?
-      // Emit special case [disp32] encoding
+      // The SIB byte must be used if the base is ESP/RSP/R12, all of which
+      // encode to an R/M value of 4, which indicates that a SIB byte is
+      // present.
+      BaseRegNo != N86::ESP &&
+      // If there is no base register and we're in 64-bit mode, we need a SIB
+      // byte to emit an addr that is just 'disp32' (the non-RIP relative form).
+      (!Is64BitMode || BaseReg != 0)) {
+    if (BaseReg == 0 ||          // [disp32]     in X86-32 mode
+        BaseReg == X86::RIP) {   // [disp32+RIP] in X86-64 mode
       MCE.emitByte(ModRMByte(0, RegOpcodeField, 5));
       emitDisplacementField(DispForReloc, DispVal, PCAdj, true);
-    } else {
-      unsigned BaseRegNo = getX86RegNum(BaseReg);
-      if (!DispForReloc && DispVal == 0 && BaseRegNo != N86::EBP) {
-        // Emit simple indirect register encoding... [EAX] f.e.
-        MCE.emitByte(ModRMByte(0, RegOpcodeField, BaseRegNo));
-      } else if (!DispForReloc && isDisp8(DispVal)) {
-        // Emit the disp8 encoding... [REG+disp8]
-        MCE.emitByte(ModRMByte(1, RegOpcodeField, BaseRegNo));
-        emitConstant(DispVal, 1);
-      } else {
-        // Emit the most general non-SIB encoding: [REG+disp32]
-        MCE.emitByte(ModRMByte(2, RegOpcodeField, BaseRegNo));
-        emitDisplacementField(DispForReloc, DispVal, PCAdj, IsPCRel);
-      }
+      return;
     }
-
-  } else {  // We need a SIB byte, so start by outputting the ModR/M byte first
-    assert(IndexReg.getReg() != X86::ESP &&
-           IndexReg.getReg() != X86::RSP && "Cannot use ESP as index reg!");
-
-    bool ForceDisp32 = false;
-    bool ForceDisp8  = false;
-    if (BaseReg == 0) {
-      // If there is no base register, we emit the special case SIB byte with
-      // MOD=0, BASE=5, to JUST get the index, scale, and displacement.
-      MCE.emitByte(ModRMByte(0, RegOpcodeField, 4));
-      ForceDisp32 = true;
-    } else if (DispForReloc) {
-      // Emit the normal disp32 encoding.
-      MCE.emitByte(ModRMByte(2, RegOpcodeField, 4));
-      ForceDisp32 = true;
-    } else if (DispVal == 0 && getX86RegNum(BaseReg) != N86::EBP) {
-      // Emit no displacement ModR/M byte
-      MCE.emitByte(ModRMByte(0, RegOpcodeField, 4));
-    } else if (isDisp8(DispVal)) {
-      // Emit the disp8 encoding...
-      MCE.emitByte(ModRMByte(1, RegOpcodeField, 4));
-      ForceDisp8 = true;           // Make sure to force 8 bit disp if Base=EBP
-    } else {
-      // Emit the normal disp32 encoding...
-      MCE.emitByte(ModRMByte(2, RegOpcodeField, 4));
-    }
-
-    // Calculate what the SS field value should be...
-    static const unsigned SSTable[] = { ~0, 0, 1, ~0, 2, ~0, ~0, ~0, 3 };
-    unsigned SS = SSTable[Scale.getImm()];
-
-    if (BaseReg == 0) {
-      // Handle the SIB byte for the case where there is no base, see Intel 
-      // Manual 2A, table 2-7. The displacement has already been output.
-      unsigned IndexRegNo;
-      if (IndexReg.getReg())
-        IndexRegNo = getX86RegNum(IndexReg.getReg());
-      else // Examples: [ESP+1*<noreg>+4] or [scaled idx]+disp32 (MOD=0,BASE=5)
-        IndexRegNo = 4;
-      emitSIBByte(SS, IndexRegNo, 5);
-    } else {
-      unsigned BaseRegNo = getX86RegNum(BaseReg);
-      unsigned IndexRegNo;
-      if (IndexReg.getReg())
-        IndexRegNo = getX86RegNum(IndexReg.getReg());
-      else
-        IndexRegNo = 4;   // For example [ESP+1*<noreg>+4]
-      emitSIBByte(SS, IndexRegNo, BaseRegNo);
+    
+    // If the base is not EBP/ESP and there is no displacement, use simple
+    // indirect register encoding, this handles addresses like [EAX].  The
+    // encoding for [EBP] with no displacement means [disp32] so we handle it
+    // by emitting a displacement of 0 below.
+    if (!DispForReloc && DispVal == 0 && BaseRegNo != N86::EBP) {
+      MCE.emitByte(ModRMByte(0, RegOpcodeField, BaseRegNo));
+      return;
     }
-
-    // Do we need to output a displacement?
-    if (ForceDisp8) {
+    
+    // Otherwise, if the displacement fits in a byte, encode as [REG+disp8].
+    if (!DispForReloc && isDisp8(DispVal)) {
+      MCE.emitByte(ModRMByte(1, RegOpcodeField, BaseRegNo));
       emitConstant(DispVal, 1);
-    } else if (DispVal != 0 || ForceDisp32) {
-      emitDisplacementField(DispForReloc, DispVal, PCAdj, IsPCRel);
+      return;
     }
+    
+    // Otherwise, emit the most general non-SIB encoding: [REG+disp32]
+    MCE.emitByte(ModRMByte(2, RegOpcodeField, BaseRegNo));
+    emitDisplacementField(DispForReloc, DispVal, PCAdj, IsPCRel);
+    return;
+  }
+  
+  // Otherwise we need a SIB byte, so start by outputting the ModR/M byte first.
+  assert(IndexReg.getReg() != X86::ESP &&
+         IndexReg.getReg() != X86::RSP && "Cannot use ESP as index reg!");
+
+  bool ForceDisp32 = false;
+  bool ForceDisp8  = false;
+  if (BaseReg == 0) {
+    // If there is no base register, we emit the special case SIB byte with
+    // MOD=0, BASE=4, to JUST get the index, scale, and displacement.
+    MCE.emitByte(ModRMByte(0, RegOpcodeField, 4));
+    ForceDisp32 = true;
+  } else if (DispForReloc) {
+    // Emit the normal disp32 encoding.
+    MCE.emitByte(ModRMByte(2, RegOpcodeField, 4));
+    ForceDisp32 = true;
+  } else if (DispVal == 0 && getX86RegNum(BaseReg) != N86::EBP) {
+    // Emit no displacement ModR/M byte
+    MCE.emitByte(ModRMByte(0, RegOpcodeField, 4));
+  } else if (isDisp8(DispVal)) {
+    // Emit the disp8 encoding...
+    MCE.emitByte(ModRMByte(1, RegOpcodeField, 4));
+    ForceDisp8 = true;           // Make sure to force 8 bit disp if Base=EBP
+  } else {
+    // Emit the normal disp32 encoding...
+    MCE.emitByte(ModRMByte(2, RegOpcodeField, 4));
+  }
+
+  // Calculate what the SS field value should be...
+  static const unsigned SSTable[] = { ~0, 0, 1, ~0, 2, ~0, ~0, ~0, 3 };
+  unsigned SS = SSTable[Scale.getImm()];
+
+  if (BaseReg == 0) {
+    // Handle the SIB byte for the case where there is no base, see Intel 
+    // Manual 2A, table 2-7. The displacement has already been output.
+    unsigned IndexRegNo;
+    if (IndexReg.getReg())
+      IndexRegNo = getX86RegNum(IndexReg.getReg());
+    else // Examples: [ESP+1*<noreg>+4] or [scaled idx]+disp32 (MOD=0,BASE=5)
+      IndexRegNo = 4;
+    emitSIBByte(SS, IndexRegNo, 5);
+  } else {
+    unsigned BaseRegNo = getX86RegNum(BaseReg);
+    unsigned IndexRegNo;
+    if (IndexReg.getReg())
+      IndexRegNo = getX86RegNum(IndexReg.getReg());
+    else
+      IndexRegNo = 4;   // For example [ESP+1*<noreg>+4]
+    emitSIBByte(SS, IndexRegNo, BaseRegNo);
+  }
+
+  // Do we need to output a displacement?
+  if (ForceDisp8) {
+    emitConstant(DispVal, 1);
+  } else if (DispVal != 0 || ForceDisp32) {
+    emitDisplacementField(DispForReloc, DispVal, PCAdj, IsPCRel);
   }
 }
 
@@ -570,7 +583,7 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI,
     // Skip the last source operand that is tied_to the dest reg. e.g. LXADD32
     --NumOps;
 
-  unsigned char BaseOpcode = II->getBaseOpcodeFor(Desc);
+  unsigned char BaseOpcode = X86II::getBaseOpcodeFor(Desc->TSFlags);
   switch (Desc->TSFlags & X86II::FormMask) {
   default:
     llvm_unreachable("Unknown FormMask value in X86 MachineCodeEmitter!");
@@ -582,25 +595,25 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI,
       llvm_unreachable("psuedo instructions should be removed before code"
                        " emission");
       break;
-    case TargetInstrInfo::INLINEASM:
+    case TargetOpcode::INLINEASM:
       // We allow inline assembler nodes with empty bodies - they can
       // implicitly define registers, which is ok for JIT.
       if (MI.getOperand(0).getSymbolName()[0])
         llvm_report_error("JIT does not support inline asm!");
       break;
-    case TargetInstrInfo::DBG_LABEL:
-    case TargetInstrInfo::EH_LABEL:
-    case TargetInstrInfo::GC_LABEL:
+    case TargetOpcode::DBG_LABEL:
+    case TargetOpcode::EH_LABEL:
+    case TargetOpcode::GC_LABEL:
       MCE.emitLabel(MI.getOperand(0).getImm());
       break;
-    case TargetInstrInfo::IMPLICIT_DEF:
-    case TargetInstrInfo::KILL:
+    case TargetOpcode::IMPLICIT_DEF:
+    case TargetOpcode::KILL:
     case X86::FP_REG_KILL:
       break;
     case X86::MOVPC32r: {
       // This emits the "call" portion of this pseudo instruction.
       MCE.emitByte(BaseOpcode);
-      emitConstant(0, X86InstrInfo::sizeOfImm(Desc));
+      emitConstant(0, X86II::getSizeOfImm(Desc->TSFlags));
       // Remember PIC base.
       PICBaseOffset = (intptr_t) MCE.getCurrentPCOffset();
       X86JITInfo *JTI = TM.getJITInfo();
@@ -639,15 +652,21 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI,
       emitExternalSymbolAddress(MO.getSymbolName(), X86::reloc_pcrel_word);
       break;
     }
+
+    // FIXME: Only used by hackish MCCodeEmitter, remove when dead.
+    if (MO.isJTI()) {
+      emitJumpTableAddress(MO.getIndex(), X86::reloc_pcrel_word);
+      break;
+    }
     
     assert(MO.isImm() && "Unknown RawFrm operand!");
     if (Opcode == X86::CALLpcrel32 || Opcode == X86::CALL64pcrel32) {
       // Fix up immediate operand for pc relative calls.
       intptr_t Imm = (intptr_t)MO.getImm();
       Imm = Imm - MCE.getCurrentPCValue() - 4;
-      emitConstant(Imm, X86InstrInfo::sizeOfImm(Desc));
+      emitConstant(Imm, X86II::getSizeOfImm(Desc->TSFlags));
     } else
-      emitConstant(MO.getImm(), X86InstrInfo::sizeOfImm(Desc));
+      emitConstant(MO.getImm(), X86II::getSizeOfImm(Desc->TSFlags));
     break;
   }
       
@@ -658,7 +677,7 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI,
       break;
       
     const MachineOperand &MO1 = MI.getOperand(CurOp++);
-    unsigned Size = X86InstrInfo::sizeOfImm(Desc);
+    unsigned Size = X86II::getSizeOfImm(Desc->TSFlags);
     if (MO1.isImm()) {
       emitConstant(MO1.getImm(), Size);
       break;
@@ -691,7 +710,7 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI,
     CurOp += 2;
     if (CurOp != NumOps)
       emitConstant(MI.getOperand(CurOp++).getImm(),
-                   X86InstrInfo::sizeOfImm(Desc));
+                   X86II::getSizeOfImm(Desc->TSFlags));
     break;
   }
   case X86II::MRMDestMem: {
@@ -702,7 +721,7 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI,
     CurOp +=  X86AddrNumOperands + 1;
     if (CurOp != NumOps)
       emitConstant(MI.getOperand(CurOp++).getImm(),
-                   X86InstrInfo::sizeOfImm(Desc));
+                   X86II::getSizeOfImm(Desc->TSFlags));
     break;
   }
 
@@ -713,7 +732,7 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI,
     CurOp += 2;
     if (CurOp != NumOps)
       emitConstant(MI.getOperand(CurOp++).getImm(),
-                   X86InstrInfo::sizeOfImm(Desc));
+                   X86II::getSizeOfImm(Desc->TSFlags));
     break;
 
   case X86II::MRMSrcMem: {
@@ -726,7 +745,7 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI,
       AddrOperands = X86AddrNumOperands;
 
     intptr_t PCAdj = (CurOp + AddrOperands + 1 != NumOps) ?
-      X86InstrInfo::sizeOfImm(Desc) : 0;
+      X86II::getSizeOfImm(Desc->TSFlags) : 0;
 
     MCE.emitByte(BaseOpcode);
     emitMemModRMByte(MI, CurOp+1, getX86RegNum(MI.getOperand(CurOp).getReg()),
@@ -734,7 +753,7 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI,
     CurOp += AddrOperands + 1;
     if (CurOp != NumOps)
       emitConstant(MI.getOperand(CurOp++).getImm(),
-                   X86InstrInfo::sizeOfImm(Desc));
+                   X86II::getSizeOfImm(Desc->TSFlags));
     break;
   }
 
@@ -743,33 +762,14 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI,
   case X86II::MRM4r: case X86II::MRM5r:
   case X86II::MRM6r: case X86II::MRM7r: {
     MCE.emitByte(BaseOpcode);
-
-    // Special handling of lfence, mfence, monitor, and mwait.
-    if (Desc->getOpcode() == X86::LFENCE ||
-        Desc->getOpcode() == X86::MFENCE ||
-        Desc->getOpcode() == X86::MONITOR ||
-        Desc->getOpcode() == X86::MWAIT) {
-      emitRegModRMByte((Desc->TSFlags & X86II::FormMask)-X86II::MRM0r);
-
-      switch (Desc->getOpcode()) {
-      default: break;
-      case X86::MONITOR:
-        MCE.emitByte(0xC8);
-        break;
-      case X86::MWAIT:
-        MCE.emitByte(0xC9);
-        break;
-      }
-    } else {
-      emitRegModRMByte(MI.getOperand(CurOp++).getReg(),
-                       (Desc->TSFlags & X86II::FormMask)-X86II::MRM0r);
-    }
+    emitRegModRMByte(MI.getOperand(CurOp++).getReg(),
+                     (Desc->TSFlags & X86II::FormMask)-X86II::MRM0r);
 
     if (CurOp == NumOps)
       break;
     
     const MachineOperand &MO1 = MI.getOperand(CurOp++);
-    unsigned Size = X86InstrInfo::sizeOfImm(Desc);
+    unsigned Size = X86II::getSizeOfImm(Desc->TSFlags);
     if (MO1.isImm()) {
       emitConstant(MO1.getImm(), Size);
       break;
@@ -798,7 +798,7 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI,
   case X86II::MRM6m: case X86II::MRM7m: {
     intptr_t PCAdj = (CurOp + X86AddrNumOperands != NumOps) ?
       (MI.getOperand(CurOp+X86AddrNumOperands).isImm() ? 
-          X86InstrInfo::sizeOfImm(Desc) : 4) : 0;
+          X86II::getSizeOfImm(Desc->TSFlags) : 4) : 0;
 
     MCE.emitByte(BaseOpcode);
     emitMemModRMByte(MI, CurOp, (Desc->TSFlags & X86II::FormMask)-X86II::MRM0m,
@@ -809,7 +809,7 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI,
       break;
     
     const MachineOperand &MO = MI.getOperand(CurOp++);
-    unsigned Size = X86InstrInfo::sizeOfImm(Desc);
+    unsigned Size = X86II::getSizeOfImm(Desc->TSFlags);
     if (MO.isImm()) {
       emitConstant(MO.getImm(), Size);
       break;
@@ -839,6 +839,27 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI,
                      getX86RegNum(MI.getOperand(CurOp).getReg()));
     ++CurOp;
     break;
+      
+  case X86II::MRM_C1:
+    MCE.emitByte(BaseOpcode);
+    MCE.emitByte(0xC1);
+    break;
+  case X86II::MRM_C8:
+    MCE.emitByte(BaseOpcode);
+    MCE.emitByte(0xC8);
+    break;
+  case X86II::MRM_C9:
+    MCE.emitByte(BaseOpcode);
+    MCE.emitByte(0xC9);
+    break;
+  case X86II::MRM_E8:
+    MCE.emitByte(BaseOpcode);
+    MCE.emitByte(0xE8);
+    break;
+  case X86II::MRM_F0:
+    MCE.emitByte(BaseOpcode);
+    MCE.emitByte(0xF0);
+    break;
   }
 
   if (!Desc->isVariadic() && CurOp != NumOps) {
@@ -850,256 +871,3 @@ void Emitter<CodeEmitter>::emitInstruction(const MachineInstr &MI,
 
   MCE.processDebugLoc(MI.getDebugLoc(), false);
 }
-
-// Adapt the Emitter / CodeEmitter interfaces to MCCodeEmitter.
-//
-// FIXME: This is a total hack designed to allow work on llvm-mc to proceed
-// without being blocked on various cleanups needed to support a clean interface
-// to instruction encoding.
-//
-// Look away!
-
-#include "llvm/DerivedTypes.h"
-
-namespace {
-class MCSingleInstructionCodeEmitter : public MachineCodeEmitter {
-  uint8_t Data[256];
-
-public:
-  MCSingleInstructionCodeEmitter() { reset(); }
-
-  void reset() { 
-    BufferBegin = Data;
-    BufferEnd = array_endof(Data);
-    CurBufferPtr = Data;
-  }
-
-  StringRef str() {
-    return StringRef(reinterpret_cast<char*>(BufferBegin),
-                     CurBufferPtr - BufferBegin);
-  }
-
-  virtual void startFunction(MachineFunction &F) {}
-  virtual bool finishFunction(MachineFunction &F) { return false; }
-  virtual void emitLabel(uint64_t LabelID) {}
-  virtual void StartMachineBasicBlock(MachineBasicBlock *MBB) {}
-  virtual bool earlyResolveAddresses() const { return false; }
-  virtual void addRelocation(const MachineRelocation &MR) { }
-  virtual uintptr_t getConstantPoolEntryAddress(unsigned Index) const {
-    return 0;
-  }
-  virtual uintptr_t getJumpTableEntryAddress(unsigned Index) const {
-    return 0;
-  }
-  virtual uintptr_t getMachineBasicBlockAddress(MachineBasicBlock *MBB) const {
-    return 0;
-  }
-  virtual uintptr_t getLabelAddress(uint64_t LabelID) const {
-    return 0;
-  }
-  virtual void setModuleInfo(MachineModuleInfo* Info) {}
-};
-
-class X86MCCodeEmitter : public MCCodeEmitter {
-  X86MCCodeEmitter(const X86MCCodeEmitter &); // DO NOT IMPLEMENT
-  void operator=(const X86MCCodeEmitter &); // DO NOT IMPLEMENT
-
-private:
-  X86TargetMachine &TM;
-  llvm::Function *DummyF;
-  TargetData *DummyTD;
-  mutable llvm::MachineFunction *DummyMF;
-  llvm::MachineBasicBlock *DummyMBB;
-  
-  MCSingleInstructionCodeEmitter *InstrEmitter;
-  Emitter<MachineCodeEmitter> *Emit;
-
-public:
-  X86MCCodeEmitter(X86TargetMachine &_TM) : TM(_TM) {
-    // Verily, thou shouldst avert thine eyes.
-    const llvm::FunctionType *FTy =
-      FunctionType::get(llvm::Type::getVoidTy(getGlobalContext()), false);
-    DummyF = Function::Create(FTy, GlobalValue::InternalLinkage);
-    DummyTD = new TargetData("");
-    DummyMF = new MachineFunction(DummyF, TM, 0);
-    DummyMBB = DummyMF->CreateMachineBasicBlock();
-
-    InstrEmitter = new MCSingleInstructionCodeEmitter();
-    Emit = new Emitter<MachineCodeEmitter>(TM, *InstrEmitter, 
-                                           *TM.getInstrInfo(),
-                                           *DummyTD, false);
-  }
-  ~X86MCCodeEmitter() {
-    delete Emit;
-    delete InstrEmitter;
-    delete DummyMF;
-    delete DummyF;
-  }
-
-  bool AddRegToInstr(const MCInst &MI, MachineInstr *Instr,
-                     unsigned Start) const {
-    if (Start + 1 > MI.getNumOperands())
-      return false;
-
-    const MCOperand &Op = MI.getOperand(Start);
-    if (!Op.isReg()) return false;
-
-    Instr->addOperand(MachineOperand::CreateReg(Op.getReg(), false));
-    return true;
-  }
-
-  bool AddImmToInstr(const MCInst &MI, MachineInstr *Instr,
-                     unsigned Start) const {
-    if (Start + 1 > MI.getNumOperands())
-      return false;
-
-    const MCOperand &Op = MI.getOperand(Start);
-    if (Op.isImm()) {
-      Instr->addOperand(MachineOperand::CreateImm(Op.getImm()));
-      return true;
-    }
-    if (!Op.isExpr())
-      return false;
-
-    const MCExpr *Expr = Op.getExpr();
-    if (const MCConstantExpr *CE = dyn_cast<MCConstantExpr>(Expr)) {
-      Instr->addOperand(MachineOperand::CreateImm(CE->getValue()));
-      return true;
-    }
-
-    // FIXME: Relocation / fixup.
-    Instr->addOperand(MachineOperand::CreateImm(0));
-    return true;
-  }
-
-  bool AddLMemToInstr(const MCInst &MI, MachineInstr *Instr,
-                     unsigned Start) const {
-    return (AddRegToInstr(MI, Instr, Start + 0) &&
-            AddImmToInstr(MI, Instr, Start + 1) &&
-            AddRegToInstr(MI, Instr, Start + 2) &&
-            AddImmToInstr(MI, Instr, Start + 3));
-  }
-
-  bool AddMemToInstr(const MCInst &MI, MachineInstr *Instr,
-                     unsigned Start) const {
-    return (AddRegToInstr(MI, Instr, Start + 0) &&
-            AddImmToInstr(MI, Instr, Start + 1) &&
-            AddRegToInstr(MI, Instr, Start + 2) &&
-            AddImmToInstr(MI, Instr, Start + 3) &&
-            AddRegToInstr(MI, Instr, Start + 4));
-  }
-
-  void EncodeInstruction(const MCInst &MI, raw_ostream &OS) const {
-    // Don't look yet!
-
-    // Convert the MCInst to a MachineInstr so we can (ab)use the regular
-    // emitter.
-    const X86InstrInfo &II = *TM.getInstrInfo();
-    const TargetInstrDesc &Desc = II.get(MI.getOpcode());    
-    MachineInstr *Instr = DummyMF->CreateMachineInstr(Desc, DebugLoc());
-    DummyMBB->push_back(Instr);
-
-    unsigned Opcode = MI.getOpcode();
-    unsigned NumOps = MI.getNumOperands();
-    unsigned CurOp = 0;
-    if (NumOps > 1 && Desc.getOperandConstraint(1, TOI::TIED_TO) != -1) {
-      Instr->addOperand(MachineOperand::CreateReg(0, false));
-      ++CurOp;
-    } else if (NumOps > 2 && 
-             Desc.getOperandConstraint(NumOps-1, TOI::TIED_TO)== 0)
-      // Skip the last source operand that is tied_to the dest reg. e.g. LXADD32
-      --NumOps;
-
-    bool OK = true;
-    switch (Desc.TSFlags & X86II::FormMask) {
-    case X86II::MRMDestReg:
-    case X86II::MRMSrcReg:
-      // Matching doesn't fill this in completely, we have to choose operand 0
-      // for a tied register.
-      OK &= AddRegToInstr(MI, Instr, 0); CurOp++;
-      OK &= AddRegToInstr(MI, Instr, CurOp++);
-      if (CurOp < NumOps)
-        OK &= AddImmToInstr(MI, Instr, CurOp);
-      break;
-
-    case X86II::RawFrm:
-      if (CurOp < NumOps) {
-        // Hack to make branches work.
-        if (!(Desc.TSFlags & X86II::ImmMask) &&
-            MI.getOperand(0).isExpr() &&
-            isa<MCSymbolRefExpr>(MI.getOperand(0).getExpr()))
-          Instr->addOperand(MachineOperand::CreateMBB(DummyMBB));
-        else
-          OK &= AddImmToInstr(MI, Instr, CurOp);
-      }
-      break;
-
-    case X86II::AddRegFrm:
-      OK &= AddRegToInstr(MI, Instr, CurOp++);
-      if (CurOp < NumOps)
-        OK &= AddImmToInstr(MI, Instr, CurOp);
-      break;
-
-    case X86II::MRM0r: case X86II::MRM1r:
-    case X86II::MRM2r: case X86II::MRM3r:
-    case X86II::MRM4r: case X86II::MRM5r:
-    case X86II::MRM6r: case X86II::MRM7r:
-      // Matching doesn't fill this in completely, we have to choose operand 0
-      // for a tied register.
-      OK &= AddRegToInstr(MI, Instr, 0); CurOp++;
-      if (CurOp < NumOps)
-        OK &= AddImmToInstr(MI, Instr, CurOp);
-      break;
-      
-    case X86II::MRM0m: case X86II::MRM1m:
-    case X86II::MRM2m: case X86II::MRM3m:
-    case X86II::MRM4m: case X86II::MRM5m:
-    case X86II::MRM6m: case X86II::MRM7m:
-      OK &= AddMemToInstr(MI, Instr, CurOp); CurOp += 5;
-      if (CurOp < NumOps)
-        OK &= AddImmToInstr(MI, Instr, CurOp);
-      break;
-
-    case X86II::MRMSrcMem:
-      OK &= AddRegToInstr(MI, Instr, CurOp++);
-      if (Opcode == X86::LEA64r || Opcode == X86::LEA64_32r ||
-          Opcode == X86::LEA16r || Opcode == X86::LEA32r)
-        OK &= AddLMemToInstr(MI, Instr, CurOp);
-      else
-        OK &= AddMemToInstr(MI, Instr, CurOp);
-      break;
-
-    case X86II::MRMDestMem:
-      OK &= AddMemToInstr(MI, Instr, CurOp); CurOp += 5;
-      OK &= AddRegToInstr(MI, Instr, CurOp);
-      break;
-
-    default:
-    case X86II::MRMInitReg:
-    case X86II::Pseudo:
-      OK = false;
-      break;
-    }
-
-    if (!OK) {
-      dbgs() << "couldn't convert inst '";
-      MI.dump();
-      dbgs() << "' to machine instr:\n";
-      Instr->dump();
-    }
-
-    InstrEmitter->reset();
-    if (OK)
-      Emit->emitInstruction(*Instr, &Desc);
-    OS << InstrEmitter->str();
-
-    Instr->eraseFromParent();
-  }
-};
-}
-
-// Ok, now you can look.
-MCCodeEmitter *llvm::createX86MCCodeEmitter(const Target &,
-                                            TargetMachine &TM) {
-  return new X86MCCodeEmitter(static_cast<X86TargetMachine&>(TM));
-}
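The MRM_C1/MRM_C8/MRM_C9/MRM_E8/MRM_F0 cases added to emitInstruction earlier in this file handle instruction forms whose encoding is simply the base opcode followed by one fixed trailing byte (on x86 these correspond to instructions such as MONITOR/MWAIT and the fence instructions, though that exact mapping is an assumption here, not stated by the patch). A minimal sketch of the emission pattern, using illustrative names rather than LLVM's API:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch of the fixed-byte "MRM_xx" emission pattern from the diff: the
// instruction encodes as its base opcode byte followed by one hard-coded
// ModRM-style byte. Enum values mirror the bytes emitted in the switch.
enum FixedForm { MRM_C1 = 0xC1, MRM_C8 = 0xC8, MRM_C9 = 0xC9,
                 MRM_E8 = 0xE8, MRM_F0 = 0xF0 };

static void emitFixedForm(std::vector<uint8_t> &Out,
                          uint8_t BaseOpcode, FixedForm Form) {
  Out.push_back(BaseOpcode);                  // the base opcode byte
  Out.push_back(static_cast<uint8_t>(Form));  // the fixed trailing byte
}
```

For example, emitFixedForm(Buf, 0x01, MRM_C8) appends the two bytes 0x01 0xC8 to the buffer.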
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86FastISel.cpp b/libclamav/c++/llvm/lib/Target/X86/X86FastISel.cpp
index 94dec7c..999a80f 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86FastISel.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86FastISel.cpp
@@ -828,30 +828,30 @@ bool X86FastISel::X86SelectBranch(Instruction *I) {
         std::swap(TrueMBB, FalseMBB);
         Predicate = CmpInst::FCMP_UNE;
         // FALL THROUGH
-      case CmpInst::FCMP_UNE: SwapArgs = false; BranchOpc = X86::JNE; break;
-      case CmpInst::FCMP_OGT: SwapArgs = false; BranchOpc = X86::JA;  break;
-      case CmpInst::FCMP_OGE: SwapArgs = false; BranchOpc = X86::JAE; break;
-      case CmpInst::FCMP_OLT: SwapArgs = true;  BranchOpc = X86::JA;  break;
-      case CmpInst::FCMP_OLE: SwapArgs = true;  BranchOpc = X86::JAE; break;
-      case CmpInst::FCMP_ONE: SwapArgs = false; BranchOpc = X86::JNE; break;
-      case CmpInst::FCMP_ORD: SwapArgs = false; BranchOpc = X86::JNP; break;
-      case CmpInst::FCMP_UNO: SwapArgs = false; BranchOpc = X86::JP;  break;
-      case CmpInst::FCMP_UEQ: SwapArgs = false; BranchOpc = X86::JE;  break;
-      case CmpInst::FCMP_UGT: SwapArgs = true;  BranchOpc = X86::JB;  break;
-      case CmpInst::FCMP_UGE: SwapArgs = true;  BranchOpc = X86::JBE; break;
-      case CmpInst::FCMP_ULT: SwapArgs = false; BranchOpc = X86::JB;  break;
-      case CmpInst::FCMP_ULE: SwapArgs = false; BranchOpc = X86::JBE; break;
+      case CmpInst::FCMP_UNE: SwapArgs = false; BranchOpc = X86::JNE_4; break;
+      case CmpInst::FCMP_OGT: SwapArgs = false; BranchOpc = X86::JA_4;  break;
+      case CmpInst::FCMP_OGE: SwapArgs = false; BranchOpc = X86::JAE_4; break;
+      case CmpInst::FCMP_OLT: SwapArgs = true;  BranchOpc = X86::JA_4;  break;
+      case CmpInst::FCMP_OLE: SwapArgs = true;  BranchOpc = X86::JAE_4; break;
+      case CmpInst::FCMP_ONE: SwapArgs = false; BranchOpc = X86::JNE_4; break;
+      case CmpInst::FCMP_ORD: SwapArgs = false; BranchOpc = X86::JNP_4; break;
+      case CmpInst::FCMP_UNO: SwapArgs = false; BranchOpc = X86::JP_4;  break;
+      case CmpInst::FCMP_UEQ: SwapArgs = false; BranchOpc = X86::JE_4;  break;
+      case CmpInst::FCMP_UGT: SwapArgs = true;  BranchOpc = X86::JB_4;  break;
+      case CmpInst::FCMP_UGE: SwapArgs = true;  BranchOpc = X86::JBE_4; break;
+      case CmpInst::FCMP_ULT: SwapArgs = false; BranchOpc = X86::JB_4;  break;
+      case CmpInst::FCMP_ULE: SwapArgs = false; BranchOpc = X86::JBE_4; break;
           
-      case CmpInst::ICMP_EQ:  SwapArgs = false; BranchOpc = X86::JE;  break;
-      case CmpInst::ICMP_NE:  SwapArgs = false; BranchOpc = X86::JNE; break;
-      case CmpInst::ICMP_UGT: SwapArgs = false; BranchOpc = X86::JA;  break;
-      case CmpInst::ICMP_UGE: SwapArgs = false; BranchOpc = X86::JAE; break;
-      case CmpInst::ICMP_ULT: SwapArgs = false; BranchOpc = X86::JB;  break;
-      case CmpInst::ICMP_ULE: SwapArgs = false; BranchOpc = X86::JBE; break;
-      case CmpInst::ICMP_SGT: SwapArgs = false; BranchOpc = X86::JG;  break;
-      case CmpInst::ICMP_SGE: SwapArgs = false; BranchOpc = X86::JGE; break;
-      case CmpInst::ICMP_SLT: SwapArgs = false; BranchOpc = X86::JL;  break;
-      case CmpInst::ICMP_SLE: SwapArgs = false; BranchOpc = X86::JLE; break;
+      case CmpInst::ICMP_EQ:  SwapArgs = false; BranchOpc = X86::JE_4;  break;
+      case CmpInst::ICMP_NE:  SwapArgs = false; BranchOpc = X86::JNE_4; break;
+      case CmpInst::ICMP_UGT: SwapArgs = false; BranchOpc = X86::JA_4;  break;
+      case CmpInst::ICMP_UGE: SwapArgs = false; BranchOpc = X86::JAE_4; break;
+      case CmpInst::ICMP_ULT: SwapArgs = false; BranchOpc = X86::JB_4;  break;
+      case CmpInst::ICMP_ULE: SwapArgs = false; BranchOpc = X86::JBE_4; break;
+      case CmpInst::ICMP_SGT: SwapArgs = false; BranchOpc = X86::JG_4;  break;
+      case CmpInst::ICMP_SGE: SwapArgs = false; BranchOpc = X86::JGE_4; break;
+      case CmpInst::ICMP_SLT: SwapArgs = false; BranchOpc = X86::JL_4;  break;
+      case CmpInst::ICMP_SLE: SwapArgs = false; BranchOpc = X86::JLE_4; break;
       default:
         return false;
       }
@@ -869,7 +869,7 @@ bool X86FastISel::X86SelectBranch(Instruction *I) {
       if (Predicate == CmpInst::FCMP_UNE) {
         // X86 requires a second branch to handle UNE (and OEQ,
         // which is mapped to UNE above).
-        BuildMI(MBB, DL, TII.get(X86::JP)).addMBB(TrueMBB);
+        BuildMI(MBB, DL, TII.get(X86::JP_4)).addMBB(TrueMBB);
       }
 
       FastEmitBranch(FalseMBB);
@@ -923,7 +923,8 @@ bool X86FastISel::X86SelectBranch(Instruction *I) {
           unsigned OpCode = SetMI->getOpcode();
 
           if (OpCode == X86::SETOr || OpCode == X86::SETBr) {
-            BuildMI(MBB, DL, TII.get(OpCode == X86::SETOr ? X86::JO : X86::JB))
+            BuildMI(MBB, DL, TII.get(OpCode == X86::SETOr ?
+                                        X86::JO_4 : X86::JB_4))
               .addMBB(TrueMBB);
             FastEmitBranch(FalseMBB);
             MBB->addSuccessor(TrueMBB);
@@ -939,7 +940,7 @@ bool X86FastISel::X86SelectBranch(Instruction *I) {
   if (OpReg == 0) return false;
 
   BuildMI(MBB, DL, TII.get(X86::TEST8rr)).addReg(OpReg).addReg(OpReg);
-  BuildMI(MBB, DL, TII.get(X86::JNE)).addMBB(TrueMBB);
+  BuildMI(MBB, DL, TII.get(X86::JNE_4)).addMBB(TrueMBB);
   FastEmitBranch(FalseMBB);
   MBB->addSuccessor(TrueMBB);
   return true;
@@ -1012,7 +1013,7 @@ bool X86FastISel::X86SelectShift(Instruction *I) {
   // of X86::CL, emit an EXTRACT_SUBREG to precisely describe what
   // we're doing here.
   if (CReg != X86::CL)
-    BuildMI(MBB, DL, TII.get(TargetInstrInfo::EXTRACT_SUBREG), X86::CL)
+    BuildMI(MBB, DL, TII.get(TargetOpcode::EXTRACT_SUBREG), X86::CL)
       .addReg(CReg).addImm(X86::SUBREG_8BIT);
 
   unsigned ResultReg = createResultReg(RC);
@@ -1156,9 +1157,10 @@ bool X86FastISel::X86VisitIntrinsicCall(IntrinsicInst &I) {
   case Intrinsic::dbg_declare: {
     DbgDeclareInst *DI = cast<DbgDeclareInst>(&I);
     X86AddressMode AM;
+    assert(DI->getAddress() && "Null address should be checked earlier!");
     if (!X86SelectAddress(DI->getAddress(), AM))
       return false;
-    const TargetInstrDesc &II = TII.get(TargetInstrInfo::DEBUG_VALUE);
+    const TargetInstrDesc &II = TII.get(TargetOpcode::DBG_VALUE);
     addFullAddress(BuildMI(MBB, DL, II), AM).addImm(0).
                                         addMetadata(DI->getVariable());
     return true;
@@ -1246,7 +1248,7 @@ bool X86FastISel::X86SelectCall(Instruction *I) {
 
   // fastcc with -tailcallopt is intended to provide a guaranteed
   // tail call optimization. Fastisel doesn't know how to do that.
-  if (CC == CallingConv::Fast && PerformTailCallOpt)
+  if (CC == CallingConv::Fast && GuaranteedTailCallOpt)
     return false;
 
   // Let SDISel handle vararg functions.
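The renames above (X86::JNE becoming X86::JNE_4, and so on) reflect the branch opcodes being split by displacement width; the _4 variants use a 4-byte (rel32) PC-relative displacement. As a sketch of how such a displacement is computed, standard x86 semantics rather than code from this patch, the offset is measured from the end of the branch instruction:

```cpp
#include <cassert>
#include <cstdint>

// rel32 displacement for a PC-relative branch: the offset is relative to
// the address of the *next* instruction (branch address + branch size).
static int32_t computeRel32(uint64_t BranchAddr, uint64_t InstrSize,
                            uint64_t TargetAddr) {
  return static_cast<int32_t>(TargetAddr - (BranchAddr + InstrSize));
}
```

For instance, a 5-byte `jmp rel32` at 0x1000 targeting 0x1010 carries the displacement 0x0B, since the jump is taken relative to 0x1005.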
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86FixupKinds.h b/libclamav/c++/llvm/lib/Target/X86/X86FixupKinds.h
new file mode 100644
index 0000000..c8dac3c
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Target/X86/X86FixupKinds.h
@@ -0,0 +1,25 @@
+//===-- X86/X86FixupKinds.h - X86 Specific Fixup Entries --------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_X86_X86FIXUPKINDS_H
+#define LLVM_X86_X86FIXUPKINDS_H
+
+#include "llvm/MC/MCFixup.h"
+
+namespace llvm {
+namespace X86 {
+enum Fixups {
+  reloc_pcrel_4byte = FirstTargetFixupKind,  // 32-bit pcrel, e.g. a branch.
+  reloc_pcrel_1byte,                         // 8-bit pcrel, e.g. branch_1
+  reloc_riprel_4byte                         // 32-bit rip-relative
+};
+}
+}
+
+#endif
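The new X86FixupKinds.h follows MC's convention of numbering target-specific fixups starting at FirstTargetFixupKind, so generic fixup kinds and each target's kinds occupy disjoint ranges. A standalone sketch of that pattern (the constant's value here is illustrative, not MCFixup's actual one):

```cpp
#include <cassert>

// Sketch of the target-fixup-kind pattern from X86FixupKinds.h: generic MC
// fixup kinds occupy [0, FirstTargetFixupKind) and each target numbers its
// own kinds upward from there. The value 128 is purely illustrative.
enum { FirstTargetFixupKind = 128 };

enum Fixups {
  reloc_pcrel_4byte = FirstTargetFixupKind, // 32-bit pc-relative
  reloc_pcrel_1byte,                        // 8-bit pc-relative
  reloc_riprel_4byte                        // 32-bit rip-relative
};
```

Because only the first enumerator is pinned, later kinds can be appended without renumbering the generic range.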
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86FloatingPoint.cpp b/libclamav/c++/llvm/lib/Target/X86/X86FloatingPoint.cpp
index 503ac14..6d6fe77 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86FloatingPoint.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86FloatingPoint.cpp
@@ -235,7 +235,7 @@ bool FPS::processBasicBlock(MachineFunction &MF, MachineBasicBlock &BB) {
     unsigned Flags = MI->getDesc().TSFlags;
     
     unsigned FPInstClass = Flags & X86II::FPTypeMask;
-    if (MI->getOpcode() == TargetInstrInfo::INLINEASM)
+    if (MI->isInlineAsm())
       FPInstClass = X86II::SpecialFP;
     
     if (FPInstClass == X86II::NotFP)
@@ -1083,7 +1083,7 @@ void FPS::handleSpecialFP(MachineBasicBlock::iterator &I) {
     }
     }
     break;
-  case TargetInstrInfo::INLINEASM: {
+  case TargetOpcode::INLINEASM: {
     // The inline asm MachineInstr currently only *uses* FP registers for the
     // 'f' constraint.  These should be turned into the current ST(x) register
     // in the machine instr.  Also, any kills should be explicitly popped after
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86ISelDAGToDAG.cpp b/libclamav/c++/llvm/lib/Target/X86/X86ISelDAGToDAG.cpp
index 91e0483..a23ab91 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86ISelDAGToDAG.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86ISelDAGToDAG.cpp
@@ -1606,7 +1606,7 @@ SDNode *X86DAGToDAGISel::SelectAtomicLoadAdd(SDNode *Node, EVT NVT) {
   }
 
   DebugLoc dl = Node->getDebugLoc();
-  SDValue Undef = SDValue(CurDAG->getMachineNode(TargetInstrInfo::IMPLICIT_DEF,
+  SDValue Undef = SDValue(CurDAG->getMachineNode(TargetOpcode::IMPLICIT_DEF,
                                                  dl, NVT), 0);
   MachineSDNode::mmo_iterator MemOp = MF->allocateMemRefsArray(1);
   MemOp[0] = cast<MemSDNode>(Node)->getMemOperand();
@@ -1652,8 +1652,8 @@ static bool HasNoSignedComparisonUses(SDNode *N) {
       case X86::SETEr: case X86::SETNEr: case X86::SETPr: case X86::SETNPr:
       case X86::SETAm: case X86::SETAEm: case X86::SETBm: case X86::SETBEm:
       case X86::SETEm: case X86::SETNEm: case X86::SETPm: case X86::SETNPm:
-      case X86::JA: case X86::JAE: case X86::JB: case X86::JBE:
-      case X86::JE: case X86::JNE: case X86::JP: case X86::JNP:
+      case X86::JA_4: case X86::JAE_4: case X86::JB_4: case X86::JBE_4:
+      case X86::JE_4: case X86::JNE_4: case X86::JP_4: case X86::JNP_4:
       case X86::CMOVA16rr: case X86::CMOVA16rm:
       case X86::CMOVA32rr: case X86::CMOVA32rm:
       case X86::CMOVA64rr: case X86::CMOVA64rm:
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.cpp b/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.cpp
index ce2032b..a644e5e 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.cpp
@@ -12,9 +12,11 @@
 //
 //===----------------------------------------------------------------------===//
 
+#define DEBUG_TYPE "x86-isel"
 #include "X86.h"
 #include "X86InstrBuilder.h"
 #include "X86ISelLowering.h"
+#include "X86MCTargetExpr.h"
 #include "X86TargetMachine.h"
 #include "X86TargetObjectFile.h"
 #include "llvm/CallingConv.h"
@@ -35,11 +37,10 @@
 #include "llvm/CodeGen/PseudoSourceValue.h"
 #include "llvm/MC/MCAsmInfo.h"
 #include "llvm/MC/MCContext.h"
-#include "llvm/MC/MCExpr.h"
 #include "llvm/MC/MCSymbol.h"
-#include "llvm/Target/TargetOptions.h"
 #include "llvm/ADT/BitVector.h"
 #include "llvm/ADT/SmallSet.h"
+#include "llvm/ADT/Statistic.h"
 #include "llvm/ADT/StringExtras.h"
 #include "llvm/ADT/VectorExtras.h"
 #include "llvm/Support/CommandLine.h"
@@ -49,6 +50,8 @@
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
+STATISTIC(NumTailCalls, "Number of tail calls");
+
 static cl::opt<bool>
 DisableMMX("disable-mmx", cl::Hidden, cl::desc("Disable use of MMX"));
 
@@ -998,19 +1001,6 @@ X86TargetLowering::X86TargetLowering(X86TargetMachine &TM)
 
   computeRegisterProperties();
 
-  // Divide and reminder operations have no vector equivalent and can
-  // trap. Do a custom widening for these operations in which we never
-  // generate more divides/remainder than the original vector width.
-  for (unsigned VT = (unsigned)MVT::FIRST_VECTOR_VALUETYPE;
-       VT <= (unsigned)MVT::LAST_VECTOR_VALUETYPE; ++VT) {
-    if (!isTypeLegal((MVT::SimpleValueType)VT)) {
-      setOperationAction(ISD::SDIV, (MVT::SimpleValueType) VT, Custom);
-      setOperationAction(ISD::UDIV, (MVT::SimpleValueType) VT, Custom);
-      setOperationAction(ISD::SREM, (MVT::SimpleValueType) VT, Custom);
-      setOperationAction(ISD::UREM, (MVT::SimpleValueType) VT, Custom);
-    }
-  }
-
   // FIXME: These should be based on subtarget info. Plus, the values should
   // be smaller when we are in optimizing for size mode.
   maxStoresPerMemset = 16; // For @llvm.memset -> sequence of stores
@@ -1127,10 +1117,8 @@ X86TargetLowering::LowerCustomJumpTableEntry(const MachineJumpTableInfo *MJTI,
          Subtarget->isPICStyleGOT());
   // In 32-bit ELF systems, our jump table entries are formed with @GOTOFF
   // entries.
-
-  // FIXME: @GOTOFF should be a property of MCSymbolRefExpr not in the MCSymbol.
-  std::string Name = MBB->getSymbol(Ctx)->getName() + "@GOTOFF";
-  return MCSymbolRefExpr::Create(Ctx.GetOrCreateSymbol(StringRef(Name)), Ctx);
+  return X86MCTargetExpr::Create(MBB->getSymbol(Ctx),
+                                 X86MCTargetExpr::GOTOFF, Ctx);
 }
 
 /// getPICJumpTableRelocaBase - Returns relocation base for the given PIC
@@ -1192,13 +1180,11 @@ X86TargetLowering::LowerReturn(SDValue Chain,
                  RVLocs, *DAG.getContext());
   CCInfo.AnalyzeReturn(Outs, RetCC_X86);
 
-  // If this is the first return lowered for this function, add the regs to the
-  // liveout set for the function.
-  if (DAG.getMachineFunction().getRegInfo().liveout_empty()) {
-    for (unsigned i = 0; i != RVLocs.size(); ++i)
-      if (RVLocs[i].isRegLoc())
-        DAG.getMachineFunction().getRegInfo().addLiveOut(RVLocs[i].getLocReg());
-  }
+  // Add the regs to the liveout set for the function.
+  MachineRegisterInfo &MRI = DAG.getMachineFunction().getRegInfo();
+  for (unsigned i = 0; i != RVLocs.size(); ++i)
+    if (RVLocs[i].isRegLoc() && !MRI.isLiveOut(RVLocs[i].getLocReg()))
+      MRI.addLiveOut(RVLocs[i].getLocReg());
 
   SDValue Flag;
 
@@ -1251,7 +1237,7 @@ X86TargetLowering::LowerReturn(SDValue Chain,
     X86MachineFunctionInfo *FuncInfo = MF.getInfo<X86MachineFunctionInfo>();
     unsigned Reg = FuncInfo->getSRetReturnReg();
     if (!Reg) {
-      Reg = MF.getRegInfo().createVirtualRegister(getRegClassFor(MVT::i64));
+      Reg = MRI.createVirtualRegister(getRegClassFor(MVT::i64));
       FuncInfo->setSRetReturnReg(Reg);
     }
     SDValue Val = DAG.getCopyFromReg(Chain, dl, Reg, getPointerTy());
@@ -1260,7 +1246,7 @@ X86TargetLowering::LowerReturn(SDValue Chain,
     Flag = Chain.getValue(1);
 
     // RAX now acts like a return value.
-    MF.getRegInfo().addLiveOut(X86::RAX);
+    MRI.addLiveOut(X86::RAX);
   }
 
   RetOps[0] = Chain;  // Update chain.
@@ -1390,7 +1376,7 @@ bool X86TargetLowering::IsCalleePop(bool IsVarArg, CallingConv::ID CallingConv){
   case CallingConv::X86_FastCall:
     return !Subtarget->is64Bit();
   case CallingConv::Fast:
-    return PerformTailCallOpt;
+    return GuaranteedTailCallOpt;
   }
 }
 
@@ -1412,18 +1398,6 @@ CCAssignFn *X86TargetLowering::CCAssignFnForNode(CallingConv::ID CC) const {
     return CC_X86_32_C;
 }
 
-/// NameDecorationForCallConv - Selects the appropriate decoration to
-/// apply to a MachineFunction containing a given calling convention.
-NameDecorationStyle
-X86TargetLowering::NameDecorationForCallConv(CallingConv::ID CallConv) {
-  if (CallConv == CallingConv::X86_FastCall)
-    return FastCall;
-  else if (CallConv == CallingConv::X86_StdCall)
-    return StdCall;
-  return None;
-}
-
-
 /// CreateCopyOfByValArgument - Make a copy of an aggregate at address specified
 /// by "Src" to address "Dst" with size and alignment information specified by
 /// the specific parameter attribute. The copy will be passed as a byval
@@ -1437,6 +1411,12 @@ CreateCopyOfByValArgument(SDValue Src, SDValue Dst, SDValue Chain,
                        /*AlwaysInline=*/true, NULL, 0, NULL, 0);
 }
 
+/// FuncIsMadeTailCallSafe - Return true if the function is being made into
+/// a tailcall target by changing its ABI.
+static bool FuncIsMadeTailCallSafe(CallingConv::ID CC) {
+  return GuaranteedTailCallOpt && CC == CallingConv::Fast;
+}
+
 SDValue
 X86TargetLowering::LowerMemArgument(SDValue Chain,
                                     CallingConv::ID CallConv,
@@ -1445,10 +1425,9 @@ X86TargetLowering::LowerMemArgument(SDValue Chain,
                                     const CCValAssign &VA,
                                     MachineFrameInfo *MFI,
                                     unsigned i) {
-
   // Create the nodes corresponding to a load from this parameter slot.
   ISD::ArgFlagsTy Flags = Ins[i].Flags;
-  bool AlwaysUseMutable = (CallConv==CallingConv::Fast) && PerformTailCallOpt;
+  bool AlwaysUseMutable = FuncIsMadeTailCallSafe(CallConv);
   bool isImmutable = !AlwaysUseMutable && !Flags.isByVal();
   EVT ValVT;
 
@@ -1463,13 +1442,17 @@ X86TargetLowering::LowerMemArgument(SDValue Chain,
   // changed with more analysis.
   // In case of tail call optimization mark all arguments mutable. Since they
   // could be overwritten by lowering of arguments in case of a tail call.
-  int FI = MFI->CreateFixedObject(ValVT.getSizeInBits()/8,
-                                  VA.getLocMemOffset(), isImmutable, false);
-  SDValue FIN = DAG.getFrameIndex(FI, getPointerTy());
-  if (Flags.isByVal())
-    return FIN;
-  return DAG.getLoad(ValVT, dl, Chain, FIN,
-                     PseudoSourceValue::getFixedStack(FI), 0);
+  if (Flags.isByVal()) {
+    int FI = MFI->CreateFixedObject(Flags.getByValSize(),
+                                    VA.getLocMemOffset(), isImmutable, false);
+    return DAG.getFrameIndex(FI, getPointerTy());
+  } else {
+    int FI = MFI->CreateFixedObject(ValVT.getSizeInBits()/8,
+                                    VA.getLocMemOffset(), isImmutable, false);
+    SDValue FIN = DAG.getFrameIndex(FI, getPointerTy());
+    return DAG.getLoad(ValVT, dl, Chain, FIN,
+                       PseudoSourceValue::getFixedStack(FI), 0);
+  }
 }
 
 SDValue
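The LowerMemArgument rewrite above splits the byval and ordinary paths: a byval argument's fixed stack object is sized by the aggregate's declared byval size, while any other argument gets an object sized by its value type. A minimal sketch of that sizing decision, using a hypothetical helper rather than LLVM's API:

```cpp
#include <cassert>

// Sketch of the frame-object sizing choice from the LowerMemArgument hunk:
// byval aggregates reserve their full declared size in bytes; ordinary
// arguments reserve sizeof(value type). Plain ints stand in for LLVM types.
static unsigned frameObjectSize(bool IsByVal, unsigned ByValSizeBytes,
                                unsigned ValueSizeBits) {
  return IsByVal ? ByValSizeBytes : ValueSizeBits / 8;
}
```

This matches the hunk's structure: the byval branch returns the frame index directly, while the other branch additionally emits a load of the value from its slot.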
@@ -1490,9 +1473,6 @@ X86TargetLowering::LowerFormalArguments(SDValue Chain,
       Fn->getName() == "main")
     FuncInfo->setForceFramePointer(true);
 
-  // Decorate the function name.
-  FuncInfo->setDecorationStyle(NameDecorationForCallConv(CallConv));
-
   MachineFrameInfo *MFI = MF.getFrameInfo();
   bool Is64Bit = Subtarget->is64Bit();
   bool IsWin64 = Subtarget->isTargetWin64();
@@ -1585,8 +1565,8 @@ X86TargetLowering::LowerFormalArguments(SDValue Chain,
   }
 
   unsigned StackSize = CCInfo.getNextStackOffset();
-  // align stack specially for tail calls
-  if (PerformTailCallOpt && CallConv == CallingConv::Fast)
+  // Align stack specially for tail calls.
+  if (FuncIsMadeTailCallSafe(CallConv))
     StackSize = GetAlignedArgumentStackSize(StackSize, DAG);
 
   // If the function takes variable number of arguments, make a frame index for
@@ -1697,13 +1677,11 @@ X86TargetLowering::LowerFormalArguments(SDValue Chain,
   // Some CCs need callee pop.
   if (IsCalleePop(isVarArg, CallConv)) {
     BytesToPopOnReturn  = StackSize; // Callee pops everything.
-    BytesCallerReserves = 0;
   } else {
     BytesToPopOnReturn  = 0; // Callee pops nothing.
     // If this is an sret function, the return should pop the hidden pointer.
     if (!Is64Bit && CallConv != CallingConv::Fast && ArgsAreStructReturn(Ins))
       BytesToPopOnReturn = 4;
-    BytesCallerReserves = StackSize;
   }
 
   if (!Is64Bit) {
@@ -1738,14 +1716,9 @@ X86TargetLowering::LowerMemOpCallTo(SDValue Chain,
 /// optimization is performed and it is required.
 SDValue
 X86TargetLowering::EmitTailCallLoadRetAddr(SelectionDAG &DAG,
-                                           SDValue &OutRetAddr,
-                                           SDValue Chain,
-                                           bool IsTailCall,
-                                           bool Is64Bit,
-                                           int FPDiff,
-                                           DebugLoc dl) {
-  if (!IsTailCall || FPDiff==0) return Chain;
-
+                                           SDValue &OutRetAddr, SDValue Chain,
+                                           bool IsTailCall, bool Is64Bit,
+                                           int FPDiff, DebugLoc dl) {
   // Adjust the Return address stack slot.
   EVT VT = getPointerTy();
   OutRetAddr = getReturnAddressFrameIndex(DAG);
@@ -1766,8 +1739,7 @@ EmitTailCallStoreRetAddr(SelectionDAG & DAG, MachineFunction &MF,
   // Calculate the new stack slot for the return address.
   int SlotSize = Is64Bit ? 8 : 4;
   int NewReturnAddrFI =
-    MF.getFrameInfo()->CreateFixedObject(SlotSize, FPDiff-SlotSize,
-                                         true, false);
+    MF.getFrameInfo()->CreateFixedObject(SlotSize, FPDiff-SlotSize, true,false);
   EVT VT = Is64Bit ? MVT::i64 : MVT::i32;
   SDValue NewRetAddrFrIdx = DAG.getFrameIndex(NewReturnAddrFI, VT);
   Chain = DAG.getStore(Chain, dl, RetAddrFrIdx, NewRetAddrFrIdx,
@@ -1778,19 +1750,30 @@ EmitTailCallStoreRetAddr(SelectionDAG & DAG, MachineFunction &MF,
 SDValue
 X86TargetLowering::LowerCall(SDValue Chain, SDValue Callee,
                              CallingConv::ID CallConv, bool isVarArg,
-                             bool isTailCall,
+                             bool &isTailCall,
                              const SmallVectorImpl<ISD::OutputArg> &Outs,
                              const SmallVectorImpl<ISD::InputArg> &Ins,
                              DebugLoc dl, SelectionDAG &DAG,
                              SmallVectorImpl<SDValue> &InVals) {
-
   MachineFunction &MF = DAG.getMachineFunction();
   bool Is64Bit        = Subtarget->is64Bit();
   bool IsStructRet    = CallIsStructReturn(Outs);
+  bool IsSibcall      = false;
+
+  if (isTailCall) {
+    // Check if it's really possible to do a tail call.
+    isTailCall = IsEligibleForTailCallOptimization(Callee, CallConv, isVarArg,
+                                                   Outs, Ins, DAG);
+
+    // Sibcalls are automatically detected tailcalls which do not require
+    // ABI changes.
+    if (!GuaranteedTailCallOpt && isTailCall)
+      IsSibcall = true;
+
+    if (isTailCall)
+      ++NumTailCalls;
+  }
 
-  assert((!isTailCall ||
-          (CallConv == CallingConv::Fast && PerformTailCallOpt)) &&
-         "IsEligibleForTailCallOptimization missed a case!");
   assert(!(isVarArg && CallConv == CallingConv::Fast) &&
          "Var args not supported with calling convention fastcc");
 
@@ -1802,11 +1785,15 @@ X86TargetLowering::LowerCall(SDValue Chain, SDValue Callee,
 
   // Get a count of how many bytes are to be pushed on the stack.
   unsigned NumBytes = CCInfo.getNextStackOffset();
-  if (PerformTailCallOpt && CallConv == CallingConv::Fast)
+  if (IsSibcall)
+    // This is a sibcall. The memory operands are available in the caller's
+    // own stack frame.
+    NumBytes = 0;
+  else if (GuaranteedTailCallOpt && CallConv == CallingConv::Fast)
     NumBytes = GetAlignedArgumentStackSize(NumBytes, DAG);
 
   int FPDiff = 0;
-  if (isTailCall) {
+  if (isTailCall && !IsSibcall) {
     // Lower arguments at fp - stackoffset + fpdiff.
     unsigned NumBytesCallerPushed =
       MF.getInfo<X86MachineFunctionInfo>()->getBytesToPopOnReturn();
@@ -1818,12 +1805,14 @@ X86TargetLowering::LowerCall(SDValue Chain, SDValue Callee,
       MF.getInfo<X86MachineFunctionInfo>()->setTCReturnAddrDelta(FPDiff);
   }
 
-  Chain = DAG.getCALLSEQ_START(Chain, DAG.getIntPtrConstant(NumBytes, true));
+  if (!IsSibcall)
+    Chain = DAG.getCALLSEQ_START(Chain, DAG.getIntPtrConstant(NumBytes, true));
 
   SDValue RetAddrFrIdx;
  // Load return address for tail calls.
-  Chain = EmitTailCallLoadRetAddr(DAG, RetAddrFrIdx, Chain, isTailCall, Is64Bit,
-                                  FPDiff, dl);
+  if (isTailCall && FPDiff)
+    Chain = EmitTailCallLoadRetAddr(DAG, RetAddrFrIdx, Chain, isTailCall,
+                                    Is64Bit, FPDiff, dl);
 
   SmallVector<std::pair<unsigned, SDValue>, 8> RegsToPass;
   SmallVector<SDValue, 8> MemOpChains;
@@ -1873,15 +1862,12 @@ X86TargetLowering::LowerCall(SDValue Chain, SDValue Callee,
 
     if (VA.isRegLoc()) {
       RegsToPass.push_back(std::make_pair(VA.getLocReg(), Arg));
-    } else {
-      if (!isTailCall || (isTailCall && isByVal)) {
-        assert(VA.isMemLoc());
-        if (StackPtr.getNode() == 0)
-          StackPtr = DAG.getCopyFromReg(Chain, dl, X86StackPtr, getPointerTy());
-
-        MemOpChains.push_back(LowerMemOpCallTo(Chain, StackPtr, Arg,
-                                               dl, DAG, VA, Flags));
-      }
+    } else if (!IsSibcall && (!isTailCall || isByVal)) {
+      assert(VA.isMemLoc());
+      if (StackPtr.getNode() == 0)
+        StackPtr = DAG.getCopyFromReg(Chain, dl, X86StackPtr, getPointerTy());
+      MemOpChains.push_back(LowerMemOpCallTo(Chain, StackPtr, Arg,
+                                             dl, DAG, VA, Flags));
     }
   }
 
@@ -1901,7 +1887,6 @@ X86TargetLowering::LowerCall(SDValue Chain, SDValue Callee,
       InFlag = Chain.getValue(1);
     }
 
-
   if (Subtarget->isPICStyleGOT()) {
     // ELF / PIC requires GOT in the EBX register before function calls via PLT
     // GOT pointer.
@@ -1971,9 +1956,11 @@ X86TargetLowering::LowerCall(SDValue Chain, SDValue Callee,
     int FI = 0;
    // Do not flag the preceding CopyToReg stuff together with the following stuff.
     InFlag = SDValue();
-    for (unsigned i = 0, e = ArgLocs.size(); i != e; ++i) {
-      CCValAssign &VA = ArgLocs[i];
-      if (!VA.isRegLoc()) {
+    if (GuaranteedTailCallOpt) {
+      for (unsigned i = 0, e = ArgLocs.size(); i != e; ++i) {
+        CCValAssign &VA = ArgLocs[i];
+        if (VA.isRegLoc())
+          continue;
         assert(VA.isMemLoc());
         SDValue Arg = Outs[i].Val;
         ISD::ArgFlagsTy Flags = Outs[i].Flags;
@@ -2081,21 +2068,22 @@ X86TargetLowering::LowerCall(SDValue Chain, SDValue Callee,
   }
 
   if (isTailCall && !WasGlobalOrExternal) {
-    unsigned Opc = Is64Bit ? X86::R11 : X86::EAX;
-
+    // Force the address into a caller-saved (call-clobbered) register, since
+    // the tail call must happen after the callee-saved registers are popped.
+    // FIXME: Give it a special register class containing only caller-saved
+    // registers instead?
+    unsigned TCReg = Is64Bit ? X86::R11 : X86::EAX;
     Chain = DAG.getCopyToReg(Chain,  dl,
-                             DAG.getRegister(Opc, getPointerTy()),
+                             DAG.getRegister(TCReg, getPointerTy()),
                              Callee,InFlag);
-    Callee = DAG.getRegister(Opc, getPointerTy());
-    // Add register as live out.
-    MF.getRegInfo().addLiveOut(Opc);
+    Callee = DAG.getRegister(TCReg, getPointerTy());
   }
 
   // Returns a chain & a flag for retval copy to use.
   SDVTList NodeTys = DAG.getVTList(MVT::Other, MVT::Flag);
   SmallVector<SDValue, 8> Ops;
 
-  if (isTailCall) {
+  if (!IsSibcall && isTailCall) {
     Chain = DAG.getCALLSEQ_END(Chain, DAG.getIntPtrConstant(NumBytes, true),
                            DAG.getIntPtrConstant(0, true), InFlag);
     InFlag = Chain.getValue(1);
@@ -2156,7 +2144,7 @@ X86TargetLowering::LowerCall(SDValue Chain, SDValue Callee,
   if (IsCalleePop(isVarArg, CallConv))
     NumBytesForCalleeToPush = NumBytes;    // Callee pops everything
   else if (!Is64Bit && CallConv != CallingConv::Fast && IsStructRet)
-    // If this is is a call to a struct-return function, the callee
+    // If this is a call to a struct-return function, the callee
     // pops the hidden struct pointer, so we have to push it back.
     // This is common for Darwin/X86, Linux & Mingw32 targets.
     NumBytesForCalleeToPush = 4;
@@ -2164,12 +2152,14 @@ X86TargetLowering::LowerCall(SDValue Chain, SDValue Callee,
     NumBytesForCalleeToPush = 0;  // Callee pops nothing.
 
   // Returns a flag for retval copy to use.
-  Chain = DAG.getCALLSEQ_END(Chain,
-                             DAG.getIntPtrConstant(NumBytes, true),
-                             DAG.getIntPtrConstant(NumBytesForCalleeToPush,
-                                                   true),
-                             InFlag);
-  InFlag = Chain.getValue(1);
+  if (!IsSibcall) {
+    Chain = DAG.getCALLSEQ_END(Chain,
+                               DAG.getIntPtrConstant(NumBytes, true),
+                               DAG.getIntPtrConstant(NumBytesForCalleeToPush,
+                                                     true),
+                               InFlag);
+    InFlag = Chain.getValue(1);
+  }
 
   // Handle result values, copying them out of physregs into vregs that we
   // return.
@@ -2231,6 +2221,50 @@ unsigned X86TargetLowering::GetAlignedArgumentStackSize(unsigned StackSize,
   return Offset;
 }
 
+/// MatchingStackOffset - Return true if the given stack call argument is
+/// already available at the same relative position in the caller's
+/// incoming argument stack.
+static
+bool MatchingStackOffset(SDValue Arg, unsigned Offset, ISD::ArgFlagsTy Flags,
+                         MachineFrameInfo *MFI, const MachineRegisterInfo *MRI,
+                         const X86InstrInfo *TII) {
+  int FI;
+  if (Arg.getOpcode() == ISD::CopyFromReg) {
+    unsigned VR = cast<RegisterSDNode>(Arg.getOperand(1))->getReg();
+    if (!VR || TargetRegisterInfo::isPhysicalRegister(VR))
+      return false;
+    MachineInstr *Def = MRI->getVRegDef(VR);
+    if (!Def)
+      return false;
+    if (!Flags.isByVal()) {
+      if (!TII->isLoadFromStackSlot(Def, FI))
+        return false;
+    } else {
+      unsigned Opcode = Def->getOpcode();
+      if ((Opcode == X86::LEA32r || Opcode == X86::LEA64r) &&
+          Def->getOperand(1).isFI()) {
+        FI = Def->getOperand(1).getIndex();
+        if (MFI->getObjectSize(FI) != Flags.getByValSize())
+          return false;
+      } else
+        return false;
+    }
+  } else {
+    LoadSDNode *Ld = dyn_cast<LoadSDNode>(Arg);
+    if (!Ld)
+      return false;
+    SDValue Ptr = Ld->getBasePtr();
+    FrameIndexSDNode *FINode = dyn_cast<FrameIndexSDNode>(Ptr);
+    if (!FINode)
+      return false;
+    FI = FINode->getIndex();
+  }
+
+  if (!MFI->isFixedObjectIndex(FI))
+    return false;
+  return Offset == MFI->getObjectOffset(FI);
+}
+
 /// IsEligibleForTailCallOptimization - Check whether the call is eligible
 /// for tail call optimization. Targets which want to do tail call
 /// optimization should implement this function.
@@ -2238,23 +2272,79 @@ bool
 X86TargetLowering::IsEligibleForTailCallOptimization(SDValue Callee,
                                                      CallingConv::ID CalleeCC,
                                                      bool isVarArg,
-                                      const SmallVectorImpl<ISD::InputArg> &Ins,
+                                    const SmallVectorImpl<ISD::OutputArg> &Outs,
+                                    const SmallVectorImpl<ISD::InputArg> &Ins,
                                                      SelectionDAG& DAG) const {
-  MachineFunction &MF = DAG.getMachineFunction();
-  CallingConv::ID CallerCC = MF.getFunction()->getCallingConv();
-  return CalleeCC == CallingConv::Fast && CallerCC == CalleeCC;
+  if (CalleeCC != CallingConv::Fast &&
+      CalleeCC != CallingConv::C)
+    return false;
+
+  // If -tailcallopt is specified, make fastcc functions tail-callable.
+  const Function *CallerF = DAG.getMachineFunction().getFunction();
+  if (GuaranteedTailCallOpt) {
+    if (CalleeCC == CallingConv::Fast &&
+        CallerF->getCallingConv() == CalleeCC)
+      return true;
+    return false;
+  }
+
+  // Look for obvious safe cases where tail call optimization does not
+  // require ABI changes. This is what GCC calls a sibcall.
+
+  // Do not tail call optimize vararg calls for now.
+  if (isVarArg)
+    return false;
+
+  // If the callee takes no arguments then go on to check the results of the
+  // call.
+  if (!Outs.empty()) {
+    // Check if stack adjustment is needed. For now, do not do this if any
+    // argument is passed on the stack.
+    SmallVector<CCValAssign, 16> ArgLocs;
+    CCState CCInfo(CalleeCC, isVarArg, getTargetMachine(),
+                   ArgLocs, *DAG.getContext());
+    CCInfo.AnalyzeCallOperands(Outs, CCAssignFnForNode(CalleeCC));
+    if (CCInfo.getNextStackOffset()) {
+      MachineFunction &MF = DAG.getMachineFunction();
+      if (MF.getInfo<X86MachineFunctionInfo>()->getBytesToPopOnReturn())
+        return false;
+      if (Subtarget->isTargetWin64())
+        // Win64 ABI has additional complications.
+        return false;
+
+      // Check whether the arguments are already laid out the same way as
+      // the caller's fixed stack objects.
+      MachineFrameInfo *MFI = MF.getFrameInfo();
+      const MachineRegisterInfo *MRI = &MF.getRegInfo();
+      const X86InstrInfo *TII =
+        ((X86TargetMachine&)getTargetMachine()).getInstrInfo();
+      for (unsigned i = 0, e = ArgLocs.size(); i != e; ++i) {
+        CCValAssign &VA = ArgLocs[i];
+        EVT RegVT = VA.getLocVT();
+        SDValue Arg = Outs[i].Val;
+        ISD::ArgFlagsTy Flags = Outs[i].Flags;
+        if (VA.getLocInfo() == CCValAssign::Indirect)
+          return false;
+        if (!VA.isRegLoc()) {
+          if (!MatchingStackOffset(Arg, VA.getLocMemOffset(), Flags,
+                                   MFI, MRI, TII))
+            return false;
+        }
+      }
+    }
+  }
+
+  return true;
 }
 
 FastISel *
-X86TargetLowering::createFastISel(MachineFunction &mf,
-                                  MachineModuleInfo *mmo,
-                                  DwarfWriter *dw,
-                                  DenseMap<const Value *, unsigned> &vm,
-                                  DenseMap<const BasicBlock *,
-                                           MachineBasicBlock *> &bm,
-                                  DenseMap<const AllocaInst *, int> &am
+X86TargetLowering::createFastISel(MachineFunction &mf, MachineModuleInfo *mmo,
+                            DwarfWriter *dw,
+                            DenseMap<const Value *, unsigned> &vm,
+                            DenseMap<const BasicBlock*, MachineBasicBlock*> &bm,
+                            DenseMap<const AllocaInst *, int> &am
 #ifndef NDEBUG
-                                  , SmallSet<Instruction*, 8> &cil
+                          , SmallSet<Instruction*, 8> &cil
 #endif
                                   ) {
   return X86::createFastISel(mf, mmo, dw, vm, bm, am
@@ -6022,7 +6112,7 @@ SDValue X86TargetLowering::LowerSELECT(SDValue Op, SelectionDAG &DAG) {
           N2C && N2C->isNullValue() &&
           RHSC && RHSC->isNullValue()) {
         SDValue CmpOp0 = Cmp.getOperand(0);
-        Cmp = DAG.getNode(X86ISD::CMP, dl, Op.getValueType(),
+        Cmp = DAG.getNode(X86ISD::CMP, dl, CmpOp0.getValueType(),
                           CmpOp0, DAG.getConstant(1, CmpOp0.getValueType()));
         return DAG.getNode(X86ISD::SETCC_CARRY, dl, Op.getValueType(),
                            DAG.getConstant(X86::COND_B, MVT::i8), Cmp);
@@ -6910,16 +7000,12 @@ SDValue X86TargetLowering::LowerTRAMPOLINE(SDValue Op,
 
   const Value *TrmpAddr = cast<SrcValueSDNode>(Op.getOperand(4))->getValue();
 
-  const X86InstrInfo *TII =
-    ((X86TargetMachine&)getTargetMachine()).getInstrInfo();
-
   if (Subtarget->is64Bit()) {
     SDValue OutChains[6];
 
     // Large code-model.
-
-    const unsigned char JMP64r  = TII->getBaseOpcodeFor(X86::JMP64r);
-    const unsigned char MOV64ri = TII->getBaseOpcodeFor(X86::MOV64ri);
+    const unsigned char JMP64r  = 0xFF; // 64-bit jmp through register opcode.
+    const unsigned char MOV64ri = 0xB8; // X86::MOV64ri opcode.
 
     const unsigned char N86R10 = RegInfo->getX86RegNum(X86::R10);
     const unsigned char N86R11 = RegInfo->getX86RegNum(X86::R11);
@@ -7014,7 +7100,8 @@ SDValue X86TargetLowering::LowerTRAMPOLINE(SDValue Op,
                        DAG.getConstant(10, MVT::i32));
     Disp = DAG.getNode(ISD::SUB, dl, MVT::i32, FPtr, Addr);
 
-    const unsigned char MOV32ri = TII->getBaseOpcodeFor(X86::MOV32ri);
+    const unsigned char MOV32ri = 0xB8; // X86::MOV32ri's opcode byte.
     const unsigned char N86Reg = RegInfo->getX86RegNum(NestReg);
     OutChains[0] = DAG.getStore(Root, dl,
                                 DAG.getConstant(MOV32ri|N86Reg, MVT::i8),
@@ -7024,7 +7111,7 @@ SDValue X86TargetLowering::LowerTRAMPOLINE(SDValue Op,
                        DAG.getConstant(1, MVT::i32));
     OutChains[1] = DAG.getStore(Root, dl, Nest, Addr, TrmpAddr, 1, false, 1);
 
-    const unsigned char JMP = TII->getBaseOpcodeFor(X86::JMP);
+    const unsigned char JMP = 0xE9; // jmp <32bit dst> opcode.
     Addr = DAG.getNode(ISD::ADD, dl, MVT::i32, Trmp,
                        DAG.getConstant(5, MVT::i32));
     OutChains[2] = DAG.getStore(Root, dl, DAG.getConstant(JMP, MVT::i8), Addr,
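The hardcoded `0xB8` and `0xE9` bytes above replace the `getBaseOpcodeFor()` lookups; in x86 encoding, `0xB8 + reg` is `mov r32, imm32` and `0xE9` is `jmp rel32`. A self-contained sketch of the 10-byte 32-bit trampoline those stores build — the layout is inferred from the offsets 0, 1, 5 and 6 in the code, not copied from LLVM:

```c
#include <stdint.h>
#include <string.h>

/* Builds the trampoline: mov <nest-reg>, imm32 ; jmp rel32.
 * n86reg is the 3-bit x86 register number folded into the MOV opcode. */
static void emit_trampoline32(uint8_t buf[10], uint8_t n86reg,
                              uint32_t nest, int32_t disp) {
    buf[0] = (uint8_t)(0xB8 | n86reg); /* MOV32ri opcode | register   */
    memcpy(buf + 1, &nest, 4);         /* static-chain immediate      */
    buf[5] = 0xE9;                     /* JMP rel32 opcode            */
    memcpy(buf + 6, &disp, 4);         /* disp = target - (trmp + 10) */
}
```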
@@ -7457,14 +7544,6 @@ void X86TargetLowering::ReplaceNodeResults(SDNode *N,
     Results.push_back(edx.getValue(1));
     return;
   }
-  case ISD::SDIV:
-  case ISD::UDIV:
-  case ISD::SREM:
-  case ISD::UREM: {
-    EVT WidenVT = getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
-    Results.push_back(DAG.UnrollVectorOp(N, WidenVT.getVectorNumElements()));
-    return;
-  }
   case ISD::ATOMIC_CMP_SWAP: {
     EVT T = N->getValueType(0);
     assert (T == MVT::i64 && "Only know how to expand i64 Cmp and Swap");
@@ -7840,7 +7919,7 @@ X86TargetLowering::EmitAtomicBitwiseWithCustomInserter(MachineInstr *bInstr,
   MIB.addReg(EAXreg);
 
   // insert branch
-  BuildMI(newMBB, dl, TII->get(X86::JNE)).addMBB(newMBB);
+  BuildMI(newMBB, dl, TII->get(X86::JNE_4)).addMBB(newMBB);
 
   F->DeleteMachineInstr(bInstr);   // The pseudo instruction is gone now.
   return nextMBB;
@@ -7997,7 +8076,7 @@ X86TargetLowering::EmitAtomicBit6432WithCustomInserter(MachineInstr *bInstr,
   MIB.addReg(X86::EDX);
 
   // insert branch
-  BuildMI(newMBB, dl, TII->get(X86::JNE)).addMBB(newMBB);
+  BuildMI(newMBB, dl, TII->get(X86::JNE_4)).addMBB(newMBB);
 
   F->DeleteMachineInstr(bInstr);   // The pseudo instruction is gone now.
   return nextMBB;
@@ -8100,7 +8179,7 @@ X86TargetLowering::EmitAtomicMinMaxWithCustomInserter(MachineInstr *mInstr,
   MIB.addReg(X86::EAX);
 
   // insert branch
-  BuildMI(newMBB, dl, TII->get(X86::JNE)).addMBB(newMBB);
+  BuildMI(newMBB, dl, TII->get(X86::JNE_4)).addMBB(newMBB);
 
   F->DeleteMachineInstr(mInstr);   // The pseudo instruction is gone now.
   return nextMBB;
@@ -8182,7 +8261,7 @@ X86TargetLowering::EmitVAStartSaveXMMRegsWithCustomInserter(
   if (!Subtarget->isTargetWin64()) {
     // If %al is 0, branch around the XMM save block.
     BuildMI(MBB, DL, TII->get(X86::TEST8rr)).addReg(CountReg).addReg(CountReg);
-    BuildMI(MBB, DL, TII->get(X86::JE)).addMBB(EndMBB);
+    BuildMI(MBB, DL, TII->get(X86::JE_4)).addMBB(EndMBB);
     MBB->addSuccessor(EndMBB);
   }
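The `JE`/`JNE` to `JE_4`/`JNE_4` renames throughout these hunks reflect naming by explicit displacement width: the `_1` forms take a rel8 displacement, the `_4` forms a rel32 one. A hedged sketch of the two `jne` machine encodings involved (per the x86 manual, not from this patch):

```c
#include <stddef.h>
#include <stdint.h>

/* Encodes jne with either an 8-bit (75 rel8) or 32-bit (0F 85 rel32)
 * displacement; returns the instruction length in bytes. */
static size_t encode_jne(uint8_t *out, int32_t rel, int force_rel32) {
    if (!force_rel32 && rel >= -128 && rel <= 127) {
        out[0] = 0x75;                  /* jne rel8  ("JNE_1") */
        out[1] = (uint8_t)rel;
        return 2;
    }
    out[0] = 0x0F;                      /* two-byte opcode escape */
    out[1] = 0x85;                      /* jne rel32 ("JNE_4")    */
    for (int i = 0; i < 4; i++)
        out[2 + i] = (uint8_t)((uint32_t)rel >> (8 * i));
    return 6;
}
```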
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.h b/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.h
index 1e66475..cf0eb40 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.h
+++ b/libclamav/c++/llvm/lib/Target/X86/X86ISelLowering.h
@@ -19,6 +19,7 @@
 #include "X86RegisterInfo.h"
 #include "X86MachineFunctionInfo.h"
 #include "llvm/Target/TargetLowering.h"
+#include "llvm/Target/TargetOptions.h"
 #include "llvm/CodeGen/FastISel.h"
 #include "llvm/CodeGen/SelectionDAG.h"
 #include "llvm/CodeGen/CallingConvLower.h"
@@ -371,7 +372,6 @@ namespace llvm {
     unsigned VarArgsGPOffset;         // X86-64 vararg func int reg offset.
     unsigned VarArgsFPOffset;         // X86-64 vararg func fp reg offset.
     int BytesToPopOnReturn;           // Number of arg bytes ret should pop.
-    int BytesCallerReserves;          // Number of arg bytes caller makes.
 
   public:
     explicit X86TargetLowering(X86TargetMachine &TM);
@@ -399,10 +399,6 @@ namespace llvm {
     //
     unsigned getBytesToPopOnReturn() const { return BytesToPopOnReturn; }
 
-    // Return the number of bytes that the caller reserves for arguments passed
-    // to this function.
-    unsigned getBytesCallerReserves() const { return BytesCallerReserves; }
- 
     /// getStackPtrReg - Return the stack pointer register we are using: either
     /// ESP or RSP.
     unsigned getStackPtrReg() const { return X86StackPtr; }
@@ -550,16 +546,6 @@ namespace llvm {
       return !X86ScalarSSEf64 || VT == MVT::f80;
     }
     
-    /// IsEligibleForTailCallOptimization - Check whether the call is eligible
-    /// for tail call optimization. Targets which want to do tail call
-    /// optimization should implement this function.
-    virtual bool
-    IsEligibleForTailCallOptimization(SDValue Callee,
-                                      CallingConv::ID CalleeCC,
-                                      bool isVarArg,
-                                      const SmallVectorImpl<ISD::InputArg> &Ins,
-                                      SelectionDAG& DAG) const;
-
     virtual const X86Subtarget* getSubtarget() {
       return Subtarget;
     }
@@ -637,13 +623,22 @@ namespace llvm {
                              ISD::ArgFlagsTy Flags);
 
     // Call lowering helpers.
+
+    /// IsEligibleForTailCallOptimization - Check whether the call is eligible
+    /// for tail call optimization. Targets which want to do tail call
+    /// optimization should implement this function.
+    bool IsEligibleForTailCallOptimization(SDValue Callee,
+                                           CallingConv::ID CalleeCC,
+                                           bool isVarArg,
+                                    const SmallVectorImpl<ISD::OutputArg> &Outs,
+                                    const SmallVectorImpl<ISD::InputArg> &Ins,
+                                           SelectionDAG& DAG) const;
     bool IsCalleePop(bool isVarArg, CallingConv::ID CallConv);
     SDValue EmitTailCallLoadRetAddr(SelectionDAG &DAG, SDValue &OutRetAddr,
                                 SDValue Chain, bool IsTailCall, bool Is64Bit,
                                 int FPDiff, DebugLoc dl);
 
     CCAssignFn *CCAssignFnForNode(CallingConv::ID CallConv) const;
-    NameDecorationStyle NameDecorationForCallConv(CallingConv::ID CallConv);
     unsigned GetAlignedArgumentStackSize(unsigned StackSize, SelectionDAG &DAG);
 
     std::pair<SDValue,SDValue> FP_TO_INTHelper(SDValue Op, SelectionDAG &DAG,
@@ -712,7 +707,7 @@ namespace llvm {
                            SmallVectorImpl<SDValue> &InVals);
     virtual SDValue
       LowerCall(SDValue Chain, SDValue Callee,
-                CallingConv::ID CallConv, bool isVarArg, bool isTailCall,
+                CallingConv::ID CallConv, bool isVarArg, bool &isTailCall,
                 const SmallVectorImpl<ISD::OutputArg> &Outs,
                 const SmallVectorImpl<ISD::InputArg> &Ins,
                 DebugLoc dl, SelectionDAG &DAG,
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86Instr64bit.td b/libclamav/c++/llvm/lib/Target/X86/X86Instr64bit.td
index 9037ba6..4ea3739 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86Instr64bit.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86Instr64bit.td
@@ -187,7 +187,7 @@ def TCRETURNri64 : I<0, Pseudo, (outs), (ins GR64:$dst, i32imm:$offset,
 
 
 let isCall = 1, isTerminator = 1, isReturn = 1, isBarrier = 1 in
-  def TAILJMPr64 : I<0xFF, MRM4r, (outs), (ins GR64:$dst),
+  def TAILJMPr64 : I<0xFF, MRM4r, (outs), (ins GR64:$dst, variable_ops),
                    "jmp{q}\t{*}$dst  # TAILCALL",
                    []>;     
 
@@ -435,7 +435,7 @@ def MOVZX64rm32 : I<0x8B, MRMSrcMem, (outs GR64:$dst), (ins i32mem:$src),
 // up to 64 bits.
 def def32 : PatLeaf<(i32 GR32:$src), [{
   return N->getOpcode() != ISD::TRUNCATE &&
-         N->getOpcode() != TargetInstrInfo::EXTRACT_SUBREG &&
+         N->getOpcode() != TargetOpcode::EXTRACT_SUBREG &&
          N->getOpcode() != ISD::CopyFromReg &&
          N->getOpcode() != X86ISD::CMOV;
 }]>;
@@ -893,35 +893,38 @@ def SAR64m1 : RI<0xD1, MRM7m, (outs), (ins i64mem:$dst),
 let isTwoAddress = 1 in {
 def RCL64r1 : RI<0xD1, MRM2r, (outs GR64:$dst), (ins GR64:$src),
                  "rcl{q}\t{1, $dst|$dst, 1}", []>;
-def RCL64m1 : RI<0xD1, MRM2m, (outs i64mem:$dst), (ins i64mem:$src),
-                 "rcl{q}\t{1, $dst|$dst, 1}", []>;
-let Uses = [CL] in {
-def RCL64rCL : RI<0xD3, MRM2r, (outs GR64:$dst), (ins GR64:$src),
-                  "rcl{q}\t{%cl, $dst|$dst, CL}", []>;
-def RCL64mCL : RI<0xD3, MRM2m, (outs i64mem:$dst), (ins i64mem:$src),
-                  "rcl{q}\t{%cl, $dst|$dst, CL}", []>;
-}
 def RCL64ri : RIi8<0xC1, MRM2r, (outs GR64:$dst), (ins GR64:$src, i8imm:$cnt),
                    "rcl{q}\t{$cnt, $dst|$dst, $cnt}", []>;
-def RCL64mi : RIi8<0xC1, MRM2m, (outs i64mem:$dst), 
-                   (ins i64mem:$src, i8imm:$cnt),
-                   "rcl{q}\t{$cnt, $dst|$dst, $cnt}", []>;
 
 def RCR64r1 : RI<0xD1, MRM3r, (outs GR64:$dst), (ins GR64:$src),
                  "rcr{q}\t{1, $dst|$dst, 1}", []>;
-def RCR64m1 : RI<0xD1, MRM3m, (outs i64mem:$dst), (ins i64mem:$src),
-                 "rcr{q}\t{1, $dst|$dst, 1}", []>;
+def RCR64ri : RIi8<0xC1, MRM3r, (outs GR64:$dst), (ins GR64:$src, i8imm:$cnt),
+                   "rcr{q}\t{$cnt, $dst|$dst, $cnt}", []>;
+
 let Uses = [CL] in {
+def RCL64rCL : RI<0xD3, MRM2r, (outs GR64:$dst), (ins GR64:$src),
+                  "rcl{q}\t{%cl, $dst|$dst, CL}", []>;
 def RCR64rCL : RI<0xD3, MRM3r, (outs GR64:$dst), (ins GR64:$src),
                   "rcr{q}\t{%cl, $dst|$dst, CL}", []>;
-def RCR64mCL : RI<0xD3, MRM3m, (outs i64mem:$dst), (ins i64mem:$src),
-                  "rcr{q}\t{%cl, $dst|$dst, CL}", []>;
 }
-def RCR64ri : RIi8<0xC1, MRM3r, (outs GR64:$dst), (ins GR64:$src, i8imm:$cnt),
-                   "rcr{q}\t{$cnt, $dst|$dst, $cnt}", []>;
-def RCR64mi : RIi8<0xC1, MRM3m, (outs i64mem:$dst), 
-                   (ins i64mem:$src, i8imm:$cnt),
+}
+
+let isTwoAddress = 0 in {
+def RCL64m1 : RI<0xD1, MRM2m, (outs), (ins i64mem:$dst),
+                 "rcl{q}\t{1, $dst|$dst, 1}", []>;
+def RCL64mi : RIi8<0xC1, MRM2m, (outs), (ins i64mem:$dst, i8imm:$cnt),
+                   "rcl{q}\t{$cnt, $dst|$dst, $cnt}", []>;
+def RCR64m1 : RI<0xD1, MRM3m, (outs), (ins i64mem:$dst),
+                 "rcr{q}\t{1, $dst|$dst, 1}", []>;
+def RCR64mi : RIi8<0xC1, MRM3m, (outs), (ins i64mem:$dst, i8imm:$cnt),
                    "rcr{q}\t{$cnt, $dst|$dst, $cnt}", []>;
+
+let Uses = [CL] in {
+def RCL64mCL : RI<0xD3, MRM2m, (outs), (ins i64mem:$dst),
+                  "rcl{q}\t{%cl, $dst|$dst, CL}", []>;
+def RCR64mCL : RI<0xD3, MRM3m, (outs), (ins i64mem:$dst),
+                  "rcr{q}\t{%cl, $dst|$dst, CL}", []>;
+}
 }
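The RCL/RCR definitions being reshuffled above rotate through the carry flag, i.e. a 65-bit rotation for the 64-bit forms. A small C model of `rcl` semantics per the x86 manual (the bit-at-a-time loop is for clarity, not how hardware implements it):

```c
#include <stdint.h>

/* Rotate-left-through-carry: the carry flag acts as a 65th bit.
 * The count is masked to 6 bits for 64-bit operands, as the ISA specifies. */
static uint64_t rcl64(uint64_t v, unsigned cnt, unsigned *cf) {
    for (cnt &= 63; cnt; cnt--) {
        unsigned msb = (unsigned)(v >> 63); /* bit shifted out into CF */
        v = (v << 1) | *cf;                 /* old CF shifted into bit 0 */
        *cf = msb;
    }
    return v;
}
```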
 
 let isTwoAddress = 1 in {
@@ -1466,9 +1469,13 @@ def CMOVNO64rm : RI<0x41, MRMSrcMem,       // if !overflow, GR64 = [mem64]
 } // isTwoAddress
 
 // Use sbb to materialize carry flag into a GPR.
+// FIXME: These are pseudo-ops that should be replaced with Pat<> patterns.
+// However, Pat<> can't replicate the destination reg into the inputs of the
+// result.
+// FIXME: Change this to have encoding Pseudo when X86MCCodeEmitter replaces
+// X86CodeEmitter.
 let Defs = [EFLAGS], Uses = [EFLAGS], isCodeGenOnly = 1 in
-def SETB_C64r : RI<0x19, MRMInitReg, (outs GR64:$dst), (ins),
-                  "sbb{q}\t$dst, $dst",
+def SETB_C64r : RI<0x19, MRMInitReg, (outs GR64:$dst), (ins), "",
                  [(set GR64:$dst, (X86setcc_c X86_COND_B, EFLAGS))]>;
 
 def : Pat<(i64 (anyext (i8 (X86setcc_c X86_COND_B, EFLAGS)))),
@@ -1606,8 +1613,7 @@ def SLDT64m : RI<0x00, MRM0m, (outs i16mem:$dst), (ins),
 // when we have a better way to specify isel priority.
 let Defs = [EFLAGS],
     AddedComplexity = 1, isReMaterializable = 1, isAsCheapAsAMove = 1 in
-def MOV64r0   : I<0x31, MRMInitReg, (outs GR64:$dst), (ins),
-                 "",
+def MOV64r0   : I<0x31, MRMInitReg, (outs GR64:$dst), (ins), "",
                  [(set GR64:$dst, 0)]>;
 
 // Materialize i64 constant where top 32-bits are zero. This could theoretically
@@ -1768,7 +1774,7 @@ def LSL64rm : RI<0x03, MRMSrcMem, (outs GR64:$dst), (ins i64mem:$src),
 def LSL64rr : RI<0x03, MRMSrcReg, (outs GR64:$dst), (ins GR64:$src),
                  "lsl{q}\t{$src, $dst|$dst, $src}", []>, TB;
 
-def SWPGS : I<0x01, RawFrm, (outs), (ins), "swpgs", []>, TB;
+def SWAPGS : I<0x01, MRM_F8, (outs), (ins), "swapgs", []>, TB;
 
 def PUSHFS64 : I<0xa0, RawFrm, (outs), (ins),
                  "push{q}\t%fs", []>, TB;
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrFPStack.td b/libclamav/c++/llvm/lib/Target/X86/X86InstrFPStack.td
index 71ec178..e22a903 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrFPStack.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrFPStack.td
@@ -339,7 +339,6 @@ def FICOMP32m: FPI<0xDA, MRM3m, (outs), (ins i32mem:$src), "ficomp{l}\t$src">;
 def FCOM64m  : FPI<0xDC, MRM2m, (outs), (ins f64mem:$src), "fcom{ll}\t$src">;
 def FCOMP64m : FPI<0xDC, MRM3m, (outs), (ins f64mem:$src), "fcomp{ll}\t$src">;
 
-def FISTTP32m: FPI<0xDD, MRM1m, (outs i32mem:$dst), (ins), "fisttp{l}\t$dst">;
 def FRSTORm  : FPI<0xDD, MRM4m, (outs f32mem:$dst), (ins), "frstor\t$dst">;
 def FSAVEm   : FPI<0xDD, MRM6m, (outs f32mem:$dst), (ins), "fnsave\t$dst">;
 def FNSTSWm  : FPI<0xDD, MRM7m, (outs f32mem:$dst), (ins), "fnstsw\t$dst">;
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrFormats.td b/libclamav/c++/llvm/lib/Target/X86/X86InstrFormats.td
index a799f16..bb81cbf 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrFormats.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrFormats.td
@@ -29,7 +29,16 @@ def MRM0m  : Format<24>; def MRM1m  : Format<25>; def MRM2m  : Format<26>;
 def MRM3m  : Format<27>; def MRM4m  : Format<28>; def MRM5m  : Format<29>;
 def MRM6m  : Format<30>; def MRM7m  : Format<31>;
 def MRMInitReg : Format<32>;
-
+def MRM_C1 : Format<33>;
+def MRM_C2 : Format<34>;
+def MRM_C3 : Format<35>;
+def MRM_C4 : Format<36>;
+def MRM_C8 : Format<37>;
+def MRM_C9 : Format<38>;
+def MRM_E8 : Format<39>;
+def MRM_F0 : Format<40>;
+def MRM_F8 : Format<41>;
+def MRM_F9 : Format<42>;
 
 // ImmType - This specifies the immediate type used by an instruction. This is
 // part of the ad-hoc solution used to emit machine instruction encodings by our
@@ -37,11 +46,13 @@ def MRMInitReg : Format<32>;
 class ImmType<bits<3> val> {
   bits<3> Value = val;
 }
-def NoImm  : ImmType<0>;
-def Imm8   : ImmType<1>;
-def Imm16  : ImmType<2>;
-def Imm32  : ImmType<3>;
-def Imm64  : ImmType<4>;
+def NoImm      : ImmType<0>;
+def Imm8       : ImmType<1>;
+def Imm8PCRel  : ImmType<2>;
+def Imm16      : ImmType<3>;
+def Imm32      : ImmType<4>;
+def Imm32PCRel : ImmType<5>;
+def Imm64      : ImmType<6>;
 
 // FPFormat - This specifies what form this FP instruction has.  This is used by
 // the Floating-Point stackifier pass.
@@ -121,6 +132,12 @@ class Ii8 <bits<8> o, Format f, dag outs, dag ins, string asm,
   let Pattern = pattern;
   let CodeSize = 3;
 }
+class Ii8PCRel<bits<8> o, Format f, dag outs, dag ins, string asm, 
+               list<dag> pattern>
+  : X86Inst<o, f, Imm8PCRel, outs, ins, asm> {
+  let Pattern = pattern;
+  let CodeSize = 3;
+}
 class Ii16<bits<8> o, Format f, dag outs, dag ins, string asm, 
            list<dag> pattern>
   : X86Inst<o, f, Imm16, outs, ins, asm> {
@@ -134,6 +151,13 @@ class Ii32<bits<8> o, Format f, dag outs, dag ins, string asm,
   let CodeSize = 3;
 }
 
+class Ii32PCRel<bits<8> o, Format f, dag outs, dag ins, string asm, 
+           list<dag> pattern>
+  : X86Inst<o, f, Imm32PCRel, outs, ins, asm> {
+  let Pattern = pattern;
+  let CodeSize = 3;
+}
+
 // FPStack Instruction Templates:
 // FPI - Floating Point Instruction template.
 class FPI<bits<8> o, Format F, dag outs, dag ins, string asm>
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrFragmentsSIMD.td b/libclamav/c++/llvm/lib/Target/X86/X86InstrFragmentsSIMD.td
new file mode 100644
index 0000000..6b9478d
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrFragmentsSIMD.td
@@ -0,0 +1,62 @@
+//======- X86InstrFragmentsSIMD.td - x86 ISA -------------*- tablegen -*-=====//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+// 
+//===----------------------------------------------------------------------===//
+//
+// This file provides pattern fragments useful for SIMD instructions.
+//
+//===----------------------------------------------------------------------===//
+
+//===----------------------------------------------------------------------===//
+// MMX Pattern Fragments
+//===----------------------------------------------------------------------===//
+
+def load_mmx : PatFrag<(ops node:$ptr), (v1i64 (load node:$ptr))>;
+
+def bc_v8i8  : PatFrag<(ops node:$in), (v8i8  (bitconvert node:$in))>;
+def bc_v4i16 : PatFrag<(ops node:$in), (v4i16 (bitconvert node:$in))>;
+def bc_v2i32 : PatFrag<(ops node:$in), (v2i32 (bitconvert node:$in))>;
+def bc_v1i64 : PatFrag<(ops node:$in), (v1i64 (bitconvert node:$in))>;
+
+//===----------------------------------------------------------------------===//
+// MMX Masks
+//===----------------------------------------------------------------------===//
+
+// MMX_SHUFFLE_get_shuf_imm xform function: convert vector_shuffle mask to
+// PSHUFW imm.
+def MMX_SHUFFLE_get_shuf_imm : SDNodeXForm<vector_shuffle, [{
+  return getI8Imm(X86::getShuffleSHUFImmediate(N));
+}]>;
+
+// Patterns for: vector_shuffle v1, v2, <2, 6, 3, 7, ...>
+def mmx_unpckh : PatFrag<(ops node:$lhs, node:$rhs),
+                         (vector_shuffle node:$lhs, node:$rhs), [{
+  return X86::isUNPCKHMask(cast<ShuffleVectorSDNode>(N));
+}]>;
+
+// Patterns for: vector_shuffle v1, v2, <0, 4, 2, 5, ...>
+def mmx_unpckl : PatFrag<(ops node:$lhs, node:$rhs),
+                         (vector_shuffle node:$lhs, node:$rhs), [{
+  return X86::isUNPCKLMask(cast<ShuffleVectorSDNode>(N));
+}]>;
+
+// Patterns for: vector_shuffle v1, <undef>, <0, 0, 1, 1, ...>
+def mmx_unpckh_undef : PatFrag<(ops node:$lhs, node:$rhs),
+                               (vector_shuffle node:$lhs, node:$rhs), [{
+  return X86::isUNPCKH_v_undef_Mask(cast<ShuffleVectorSDNode>(N));
+}]>;
+
+// Patterns for: vector_shuffle v1, <undef>, <2, 2, 3, 3, ...>
+def mmx_unpckl_undef : PatFrag<(ops node:$lhs, node:$rhs),
+                               (vector_shuffle node:$lhs, node:$rhs), [{
+  return X86::isUNPCKL_v_undef_Mask(cast<ShuffleVectorSDNode>(N));
+}]>;
+
+def mmx_pshufw : PatFrag<(ops node:$lhs, node:$rhs),
+                         (vector_shuffle node:$lhs, node:$rhs), [{
+  return X86::isPSHUFDMask(cast<ShuffleVectorSDNode>(N));
+}], MMX_SHUFFLE_get_shuf_imm>;
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.cpp b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.cpp
index 76c48c3..a0d0312 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.cpp
@@ -1587,44 +1587,44 @@ X86InstrInfo::commuteInstruction(MachineInstr *MI, bool NewMI) const {
 static X86::CondCode GetCondFromBranchOpc(unsigned BrOpc) {
   switch (BrOpc) {
   default: return X86::COND_INVALID;
-  case X86::JE:  return X86::COND_E;
-  case X86::JNE: return X86::COND_NE;
-  case X86::JL:  return X86::COND_L;
-  case X86::JLE: return X86::COND_LE;
-  case X86::JG:  return X86::COND_G;
-  case X86::JGE: return X86::COND_GE;
-  case X86::JB:  return X86::COND_B;
-  case X86::JBE: return X86::COND_BE;
-  case X86::JA:  return X86::COND_A;
-  case X86::JAE: return X86::COND_AE;
-  case X86::JS:  return X86::COND_S;
-  case X86::JNS: return X86::COND_NS;
-  case X86::JP:  return X86::COND_P;
-  case X86::JNP: return X86::COND_NP;
-  case X86::JO:  return X86::COND_O;
-  case X86::JNO: return X86::COND_NO;
+  case X86::JE_4:  return X86::COND_E;
+  case X86::JNE_4: return X86::COND_NE;
+  case X86::JL_4:  return X86::COND_L;
+  case X86::JLE_4: return X86::COND_LE;
+  case X86::JG_4:  return X86::COND_G;
+  case X86::JGE_4: return X86::COND_GE;
+  case X86::JB_4:  return X86::COND_B;
+  case X86::JBE_4: return X86::COND_BE;
+  case X86::JA_4:  return X86::COND_A;
+  case X86::JAE_4: return X86::COND_AE;
+  case X86::JS_4:  return X86::COND_S;
+  case X86::JNS_4: return X86::COND_NS;
+  case X86::JP_4:  return X86::COND_P;
+  case X86::JNP_4: return X86::COND_NP;
+  case X86::JO_4:  return X86::COND_O;
+  case X86::JNO_4: return X86::COND_NO;
   }
 }
 
 unsigned X86::GetCondBranchFromCond(X86::CondCode CC) {
   switch (CC) {
   default: llvm_unreachable("Illegal condition code!");
-  case X86::COND_E:  return X86::JE;
-  case X86::COND_NE: return X86::JNE;
-  case X86::COND_L:  return X86::JL;
-  case X86::COND_LE: return X86::JLE;
-  case X86::COND_G:  return X86::JG;
-  case X86::COND_GE: return X86::JGE;
-  case X86::COND_B:  return X86::JB;
-  case X86::COND_BE: return X86::JBE;
-  case X86::COND_A:  return X86::JA;
-  case X86::COND_AE: return X86::JAE;
-  case X86::COND_S:  return X86::JS;
-  case X86::COND_NS: return X86::JNS;
-  case X86::COND_P:  return X86::JP;
-  case X86::COND_NP: return X86::JNP;
-  case X86::COND_O:  return X86::JO;
-  case X86::COND_NO: return X86::JNO;
+  case X86::COND_E:  return X86::JE_4;
+  case X86::COND_NE: return X86::JNE_4;
+  case X86::COND_L:  return X86::JL_4;
+  case X86::COND_LE: return X86::JLE_4;
+  case X86::COND_G:  return X86::JG_4;
+  case X86::COND_GE: return X86::JGE_4;
+  case X86::COND_B:  return X86::JB_4;
+  case X86::COND_BE: return X86::JBE_4;
+  case X86::COND_A:  return X86::JA_4;
+  case X86::COND_AE: return X86::JAE_4;
+  case X86::COND_S:  return X86::JS_4;
+  case X86::COND_NS: return X86::JNS_4;
+  case X86::COND_P:  return X86::JP_4;
+  case X86::COND_NP: return X86::JNP_4;
+  case X86::COND_O:  return X86::JO_4;
+  case X86::COND_NO: return X86::JNO_4;
   }
 }
 
@@ -1694,7 +1694,7 @@ bool X86InstrInfo::AnalyzeBranch(MachineBasicBlock &MBB,
       return true;
 
     // Handle unconditional branches.
-    if (I->getOpcode() == X86::JMP) {
+    if (I->getOpcode() == X86::JMP_4) {
       if (!AllowModify) {
         TBB = I->getOperand(0).getMBB();
         continue;
@@ -1778,7 +1778,7 @@ unsigned X86InstrInfo::RemoveBranch(MachineBasicBlock &MBB) const {
 
   while (I != MBB.begin()) {
     --I;
-    if (I->getOpcode() != X86::JMP &&
+    if (I->getOpcode() != X86::JMP_4 &&
         GetCondFromBranchOpc(I->getOpcode()) == X86::COND_INVALID)
       break;
     // Remove the branch.
@@ -1804,7 +1804,7 @@ X86InstrInfo::InsertBranch(MachineBasicBlock &MBB, MachineBasicBlock *TBB,
   if (Cond.empty()) {
     // Unconditional branch?
     assert(!FBB && "Unconditional branch with multiple successors!");
-    BuildMI(&MBB, dl, get(X86::JMP)).addMBB(TBB);
+    BuildMI(&MBB, dl, get(X86::JMP_4)).addMBB(TBB);
     return 1;
   }
 
@@ -1814,16 +1814,16 @@ X86InstrInfo::InsertBranch(MachineBasicBlock &MBB, MachineBasicBlock *TBB,
   switch (CC) {
   case X86::COND_NP_OR_E:
     // Synthesize NP_OR_E with two branches.
-    BuildMI(&MBB, dl, get(X86::JNP)).addMBB(TBB);
+    BuildMI(&MBB, dl, get(X86::JNP_4)).addMBB(TBB);
     ++Count;
-    BuildMI(&MBB, dl, get(X86::JE)).addMBB(TBB);
+    BuildMI(&MBB, dl, get(X86::JE_4)).addMBB(TBB);
     ++Count;
     break;
   case X86::COND_NE_OR_P:
     // Synthesize NE_OR_P with two branches.
-    BuildMI(&MBB, dl, get(X86::JNE)).addMBB(TBB);
+    BuildMI(&MBB, dl, get(X86::JNE_4)).addMBB(TBB);
     ++Count;
-    BuildMI(&MBB, dl, get(X86::JP)).addMBB(TBB);
+    BuildMI(&MBB, dl, get(X86::JP_4)).addMBB(TBB);
     ++Count;
     break;
   default: {
@@ -1834,7 +1834,7 @@ X86InstrInfo::InsertBranch(MachineBasicBlock &MBB, MachineBasicBlock *TBB,
   }
   if (FBB) {
     // Two-way Conditional branch. Insert the second branch.
-    BuildMI(&MBB, dl, get(X86::JMP)).addMBB(FBB);
+    BuildMI(&MBB, dl, get(X86::JMP_4)).addMBB(FBB);
     ++Count;
   }
   return Count;
@@ -3014,22 +3014,11 @@ isSafeToMoveRegClassDefs(const TargetRegisterClass *RC) const {
            RC == &X86::RFP64RegClass || RC == &X86::RFP80RegClass);
 }
 
-unsigned X86InstrInfo::sizeOfImm(const TargetInstrDesc *Desc) {
-  switch (Desc->TSFlags & X86II::ImmMask) {
-  case X86II::Imm8:   return 1;
-  case X86II::Imm16:  return 2;
-  case X86II::Imm32:  return 4;
-  case X86II::Imm64:  return 8;
-  default: llvm_unreachable("Immediate size not set!");
-    return 0;
-  }
-}
 
-/// isX86_64ExtendedReg - Is the MachineOperand a x86-64 extended register?
-/// e.g. r8, xmm8, etc.
-bool X86InstrInfo::isX86_64ExtendedReg(const MachineOperand &MO) {
-  if (!MO.isReg()) return false;
-  switch (MO.getReg()) {
+/// isX86_64ExtendedReg - Is the given register an x86-64 extended (r8 or
+/// higher) register?  e.g. r8, xmm8, xmm13, etc.
+bool X86InstrInfo::isX86_64ExtendedReg(unsigned RegNo) {
+  switch (RegNo) {
   default: break;
   case X86::R8:    case X86::R9:    case X86::R10:   case X86::R11:
   case X86::R12:   case X86::R13:   case X86::R14:   case X86::R15:
@@ -3383,24 +3372,24 @@ static unsigned GetInstSizeWithDesc(const MachineInstr &MI,
     switch (Opcode) {
     default: 
       break;
-    case TargetInstrInfo::INLINEASM: {
+    case TargetOpcode::INLINEASM: {
       const MachineFunction *MF = MI.getParent()->getParent();
       const TargetInstrInfo &TII = *MF->getTarget().getInstrInfo();
       FinalSize += TII.getInlineAsmLength(MI.getOperand(0).getSymbolName(),
                                           *MF->getTarget().getMCAsmInfo());
       break;
     }
-    case TargetInstrInfo::DBG_LABEL:
-    case TargetInstrInfo::EH_LABEL:
+    case TargetOpcode::DBG_LABEL:
+    case TargetOpcode::EH_LABEL:
       break;
-    case TargetInstrInfo::IMPLICIT_DEF:
-    case TargetInstrInfo::KILL:
+    case TargetOpcode::IMPLICIT_DEF:
+    case TargetOpcode::KILL:
     case X86::FP_REG_KILL:
       break;
     case X86::MOVPC32r: {
       // This emits the "call" portion of this pseudo instruction.
       ++FinalSize;
-      FinalSize += sizeConstant(X86InstrInfo::sizeOfImm(Desc));
+      FinalSize += sizeConstant(X86II::getSizeOfImm(Desc->TSFlags));
       break;
     }
     }
@@ -3418,7 +3407,7 @@ static unsigned GetInstSizeWithDesc(const MachineInstr &MI,
       } else if (MO.isSymbol()) {
         FinalSize += sizeExternalSymbolAddress(false);
       } else if (MO.isImm()) {
-        FinalSize += sizeConstant(X86InstrInfo::sizeOfImm(Desc));
+        FinalSize += sizeConstant(X86II::getSizeOfImm(Desc->TSFlags));
       } else {
         llvm_unreachable("Unknown RawFrm operand!");
       }
@@ -3431,7 +3420,7 @@ static unsigned GetInstSizeWithDesc(const MachineInstr &MI,
     
     if (CurOp != NumOps) {
       const MachineOperand &MO1 = MI.getOperand(CurOp++);
-      unsigned Size = X86InstrInfo::sizeOfImm(Desc);
+      unsigned Size = X86II::getSizeOfImm(Desc->TSFlags);
       if (MO1.isImm())
         FinalSize += sizeConstant(Size);
       else {
@@ -3456,7 +3445,7 @@ static unsigned GetInstSizeWithDesc(const MachineInstr &MI,
     CurOp += 2;
     if (CurOp != NumOps) {
       ++CurOp;
-      FinalSize += sizeConstant(X86InstrInfo::sizeOfImm(Desc));
+      FinalSize += sizeConstant(X86II::getSizeOfImm(Desc->TSFlags));
     }
     break;
   }
@@ -3466,7 +3455,7 @@ static unsigned GetInstSizeWithDesc(const MachineInstr &MI,
     CurOp +=  X86AddrNumOperands + 1;
     if (CurOp != NumOps) {
       ++CurOp;
-      FinalSize += sizeConstant(X86InstrInfo::sizeOfImm(Desc));
+      FinalSize += sizeConstant(X86II::getSizeOfImm(Desc->TSFlags));
     }
     break;
   }
@@ -3477,7 +3466,7 @@ static unsigned GetInstSizeWithDesc(const MachineInstr &MI,
     CurOp += 2;
     if (CurOp != NumOps) {
       ++CurOp;
-      FinalSize += sizeConstant(X86InstrInfo::sizeOfImm(Desc));
+      FinalSize += sizeConstant(X86II::getSizeOfImm(Desc->TSFlags));
     }
     break;
 
@@ -3494,7 +3483,7 @@ static unsigned GetInstSizeWithDesc(const MachineInstr &MI,
     CurOp += AddrOperands + 1;
     if (CurOp != NumOps) {
       ++CurOp;
-      FinalSize += sizeConstant(X86InstrInfo::sizeOfImm(Desc));
+      FinalSize += sizeConstant(X86II::getSizeOfImm(Desc->TSFlags));
     }
     break;
   }
@@ -3519,7 +3508,7 @@ static unsigned GetInstSizeWithDesc(const MachineInstr &MI,
 
     if (CurOp != NumOps) {
       const MachineOperand &MO1 = MI.getOperand(CurOp++);
-      unsigned Size = X86InstrInfo::sizeOfImm(Desc);
+      unsigned Size = X86II::getSizeOfImm(Desc->TSFlags);
       if (MO1.isImm())
         FinalSize += sizeConstant(Size);
       else {
@@ -3549,7 +3538,7 @@ static unsigned GetInstSizeWithDesc(const MachineInstr &MI,
 
     if (CurOp != NumOps) {
       const MachineOperand &MO = MI.getOperand(CurOp++);
-      unsigned Size = X86InstrInfo::sizeOfImm(Desc);
+      unsigned Size = X86II::getSizeOfImm(Desc->TSFlags);
       if (MO.isImm())
         FinalSize += sizeConstant(Size);
       else {
@@ -3567,6 +3556,14 @@ static unsigned GetInstSizeWithDesc(const MachineInstr &MI,
       }
     }
     break;
+    
+  case X86II::MRM_C1:
+  case X86II::MRM_C8:
+  case X86II::MRM_C9:
+  case X86II::MRM_E8:
+  case X86II::MRM_F0:
+    FinalSize += 2;
+    break;
   }
 
   case X86II::MRMInitReg:
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.h b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.h
index 4f35d0d..5111719 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.h
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.h
@@ -18,7 +18,6 @@
 #include "X86.h"
 #include "X86RegisterInfo.h"
 #include "llvm/ADT/DenseMap.h"
-#include "llvm/Target/TargetRegisterInfo.h"
 
 namespace llvm {
   class X86RegisterInfo;
@@ -269,6 +268,18 @@ namespace X86II {
     // MRMInitReg - This form is used for instructions whose source and
     // destinations are the same register.
     MRMInitReg = 32,
+    
+    // MRM_C1 - A mod/rm byte of exactly 0xC1.
+    MRM_C1 = 33,
+    MRM_C2 = 34,
+    MRM_C3 = 35,
+    MRM_C4 = 36,
+    MRM_C8 = 37,
+    MRM_C9 = 38,
+    MRM_E8 = 39,
+    MRM_F0 = 40,
+    MRM_F8 = 41,
+    MRM_F9 = 42,
 
     FormMask       = 63,
 
@@ -332,11 +343,13 @@ namespace X86II {
     // This three-bit field describes the size of an immediate operand.  Zero is
     // unused so that we can tell if we forgot to set a value.
     ImmShift = 13,
-    ImmMask  = 7 << ImmShift,
-    Imm8     = 1 << ImmShift,
-    Imm16    = 2 << ImmShift,
-    Imm32    = 3 << ImmShift,
-    Imm64    = 4 << ImmShift,
+    ImmMask    = 7 << ImmShift,
+    Imm8       = 1 << ImmShift,
+    Imm8PCRel  = 2 << ImmShift,
+    Imm16      = 3 << ImmShift,
+    Imm32      = 4 << ImmShift,
+    Imm32PCRel = 5 << ImmShift,
+    Imm64      = 6 << ImmShift,
 
     //===------------------------------------------------------------------===//
     // FP Instruction Classification...  Zero is non-fp instruction.
@@ -389,6 +402,47 @@ namespace X86II {
     OpcodeShift   = 24,
     OpcodeMask    = 0xFF << OpcodeShift
   };
+  
+  // getBaseOpcodeFor - This function returns the "base" X86 opcode for the
+  // specified machine instruction.
+  //
+  static inline unsigned char getBaseOpcodeFor(unsigned TSFlags) {
+    return TSFlags >> X86II::OpcodeShift;
+  }
+  
+  static inline bool hasImm(unsigned TSFlags) {
+    return (TSFlags & X86II::ImmMask) != 0;
+  }
+  
+  /// getSizeOfImm - Decode the "size of immediate" field from the TSFlags field
+  /// of the specified instruction.
+  static inline unsigned getSizeOfImm(unsigned TSFlags) {
+    switch (TSFlags & X86II::ImmMask) {
+    default: assert(0 && "Unknown immediate size");
+    case X86II::Imm8:
+    case X86II::Imm8PCRel:  return 1;
+    case X86II::Imm16:      return 2;
+    case X86II::Imm32:
+    case X86II::Imm32PCRel: return 4;
+    case X86II::Imm64:      return 8;
+    }
+  }
+  
+  /// isImmPCRel - Return true if the immediate of the specified instruction's
+  /// TSFlags indicates that it is pc relative.
+  static inline bool isImmPCRel(unsigned TSFlags) {
+    switch (TSFlags & X86II::ImmMask) {
+      default: assert(0 && "Unknown immediate size");
+      case X86II::Imm8PCRel:
+      case X86II::Imm32PCRel:
+        return true;
+      case X86II::Imm8:
+      case X86II::Imm16:
+      case X86II::Imm32:
+      case X86II::Imm64:
+        return false;
+    }
+  }    
 }
 
 const int X86AddrNumOperands = 5;
@@ -637,25 +691,21 @@ public:
   /// instruction that defines the specified register class.
   bool isSafeToMoveRegClassDefs(const TargetRegisterClass *RC) const;
 
-  // getBaseOpcodeFor - This function returns the "base" X86 opcode for the
-  // specified machine instruction.
-  //
-  unsigned char getBaseOpcodeFor(const TargetInstrDesc *TID) const {
-    return TID->TSFlags >> X86II::OpcodeShift;
-  }
-  unsigned char getBaseOpcodeFor(unsigned Opcode) const {
-    return getBaseOpcodeFor(&get(Opcode));
-  }
-  
   static bool isX86_64NonExtLowByteReg(unsigned reg) {
     return (reg == X86::SPL || reg == X86::BPL ||
           reg == X86::SIL || reg == X86::DIL);
   }
   
-  static unsigned sizeOfImm(const TargetInstrDesc *Desc);
-  static bool isX86_64ExtendedReg(const MachineOperand &MO);
+  static bool isX86_64ExtendedReg(const MachineOperand &MO) {
+    if (!MO.isReg()) return false;
+    return isX86_64ExtendedReg(MO.getReg());
+  }
   static unsigned determineREX(const MachineInstr &MI);
 
+  /// isX86_64ExtendedReg - Is the given register an x86-64 extended (r8 or
+  /// higher) register?  e.g. r8, xmm8, xmm13, etc.
+  static bool isX86_64ExtendedReg(unsigned RegNo);
+
   /// GetInstSize - Returns the size of the specified MachineInstr.
   ///
   virtual unsigned GetInstSizeInBytes(const MachineInstr *MI) const;
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.td b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.td
index 396cb53..25cd297 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrInfo.td
@@ -182,10 +182,6 @@ def X86mul_imm : SDNode<"X86ISD::MUL_IMM", SDTIntBinOp>;
 // X86 Operand Definitions.
 //
 
-def i32imm_pcrel : Operand<i32> {
-  let PrintMethod = "print_pcrel_imm";
-}
-
 // A version of ptr_rc which excludes SP, ESP, and RSP. This is used for
 // the index operand of an address, to conform to x86 encoding restrictions.
 def ptr_rc_nosp : PointerLikeRegClass<1>;
@@ -196,6 +192,14 @@ def X86MemAsmOperand : AsmOperandClass {
   let Name = "Mem";
   let SuperClass = ?;
 }
+def X86AbsMemAsmOperand : AsmOperandClass {
+  let Name = "AbsMem";
+  let SuperClass = X86MemAsmOperand;
+}
+def X86NoSegMemAsmOperand : AsmOperandClass {
+  let Name = "NoSegMem";
+  let SuperClass = X86MemAsmOperand;
+}
 class X86MemOperand<string printMethod> : Operand<iPTR> {
   let PrintMethod = printMethod;
   let MIOperandInfo = (ops ptr_rc, i8imm, ptr_rc_nosp, i32imm, i8imm);
@@ -207,11 +211,6 @@ def opaque48mem : X86MemOperand<"printopaquemem">;
 def opaque80mem : X86MemOperand<"printopaquemem">;
 def opaque512mem : X86MemOperand<"printopaquemem">;
 
-def offset8 : Operand<i64>  { let PrintMethod = "print_pcrel_imm"; }
-def offset16 : Operand<i64> { let PrintMethod = "print_pcrel_imm"; }
-def offset32 : Operand<i64> { let PrintMethod = "print_pcrel_imm"; }
-def offset64 : Operand<i64> { let PrintMethod = "print_pcrel_imm"; }
-
 def i8mem   : X86MemOperand<"printi8mem">;
 def i16mem  : X86MemOperand<"printi16mem">;
 def i32mem  : X86MemOperand<"printi32mem">;
@@ -235,7 +234,22 @@ def i8mem_NOREX : Operand<i64> {
 def lea32mem : Operand<i32> {
   let PrintMethod = "printlea32mem";
   let MIOperandInfo = (ops GR32, i8imm, GR32_NOSP, i32imm);
-  let ParserMatchClass = X86MemAsmOperand;
+  let ParserMatchClass = X86NoSegMemAsmOperand;
+}
+
+let ParserMatchClass = X86AbsMemAsmOperand,
+    PrintMethod = "print_pcrel_imm" in {
+def i32imm_pcrel : Operand<i32>;
+
+def offset8 : Operand<i64>;
+def offset16 : Operand<i64>;
+def offset32 : Operand<i64>;
+def offset64 : Operand<i64>;
+
+// Branch targets have OtherVT type and print as pc-relative values.
+def brtarget : Operand<OtherVT>;
+def brtarget8 : Operand<OtherVT>;
+
 }
 
 def SSECC : Operand<i8> {
@@ -257,15 +271,6 @@ def i32i8imm  : Operand<i32> {
   let ParserMatchClass = ImmSExt8AsmOperand;
 }
 
-// Branch targets have OtherVT type and print as pc-relative values.
-def brtarget : Operand<OtherVT> {
-  let PrintMethod = "print_pcrel_imm";
-}
-
-def brtarget8 : Operand<OtherVT> {
-  let PrintMethod = "print_pcrel_imm";
-}
-
 //===----------------------------------------------------------------------===//
 // X86 Complex Pattern Definitions.
 //
@@ -591,7 +596,7 @@ let neverHasSideEffects = 1, isNotDuplicable = 1, Uses = [ESP] in
                       "", []>;
 
 //===----------------------------------------------------------------------===//
-//  Control Flow Instructions...
+//  Control Flow Instructions.
 //
 
 // Return instructions.
@@ -609,16 +614,46 @@ let isTerminator = 1, isReturn = 1, isBarrier = 1,
                     "lret\t$amt", []>;
 }
 
-// All branches are RawFrm, Void, Branch, and Terminators
-let isBranch = 1, isTerminator = 1 in
-  class IBr<bits<8> opcode, dag ins, string asm, list<dag> pattern> :
-        I<opcode, RawFrm, (outs), ins, asm, pattern>;
+// Unconditional branches.
+let isBarrier = 1, isBranch = 1, isTerminator = 1 in {
+  def JMP_4 : Ii32PCRel<0xE9, RawFrm, (outs), (ins brtarget:$dst),
+                        "jmp\t$dst", [(br bb:$dst)]>;
+  def JMP_1 : Ii8PCRel<0xEB, RawFrm, (outs), (ins brtarget8:$dst),
+                       "jmp\t$dst", []>;
+}
 
-let isBranch = 1, isBarrier = 1 in {
-  def JMP : IBr<0xE9, (ins brtarget:$dst), "jmp\t$dst", [(br bb:$dst)]>;
-  def JMP8 : IBr<0xEB, (ins brtarget8:$dst), "jmp\t$dst", []>;
+// Conditional Branches.
+let isBranch = 1, isTerminator = 1, Uses = [EFLAGS] in {
+  multiclass ICBr<bits<8> opc1, bits<8> opc4, string asm, PatFrag Cond> {
+    def _1 : Ii8PCRel <opc1, RawFrm, (outs), (ins brtarget8:$dst), asm, []>;
+    def _4 : Ii32PCRel<opc4, RawFrm, (outs), (ins brtarget:$dst), asm,
+                       [(X86brcond bb:$dst, Cond, EFLAGS)]>, TB;
+  }
 }
 
+defm JO  : ICBr<0x70, 0x80, "jo\t$dst" , X86_COND_O>;
+defm JNO : ICBr<0x71, 0x81, "jno\t$dst" , X86_COND_NO>;
+defm JB  : ICBr<0x72, 0x82, "jb\t$dst" , X86_COND_B>;
+defm JAE : ICBr<0x73, 0x83, "jae\t$dst", X86_COND_AE>;
+defm JE  : ICBr<0x74, 0x84, "je\t$dst" , X86_COND_E>;
+defm JNE : ICBr<0x75, 0x85, "jne\t$dst", X86_COND_NE>;
+defm JBE : ICBr<0x76, 0x86, "jbe\t$dst", X86_COND_BE>;
+defm JA  : ICBr<0x77, 0x87, "ja\t$dst" , X86_COND_A>;
+defm JS  : ICBr<0x78, 0x88, "js\t$dst" , X86_COND_S>;
+defm JNS : ICBr<0x79, 0x89, "jns\t$dst", X86_COND_NS>;
+defm JP  : ICBr<0x7A, 0x8A, "jp\t$dst" , X86_COND_P>;
+defm JNP : ICBr<0x7B, 0x8B, "jnp\t$dst", X86_COND_NP>;
+defm JL  : ICBr<0x7C, 0x8C, "jl\t$dst" , X86_COND_L>;
+defm JGE : ICBr<0x7D, 0x8D, "jge\t$dst", X86_COND_GE>;
+defm JLE : ICBr<0x7E, 0x8E, "jle\t$dst", X86_COND_LE>;
+defm JG  : ICBr<0x7F, 0x8F, "jg\t$dst" , X86_COND_G>;
+
+// FIXME: What about the CX/RCX versions of this instruction?
+let Uses = [ECX], isBranch = 1, isTerminator = 1 in
+  def JCXZ8 : Ii8PCRel<0xE3, RawFrm, (outs), (ins brtarget8:$dst),
+                       "jcxz\t$dst", []>;
+
+
 // Indirect branches
 let isBranch = 1, isTerminator = 1, isBarrier = 1, isIndirectBranch = 1 in {
   def JMP32r     : I<0xFF, MRM4r, (outs), (ins GR32:$dst), "jmp{l}\t{*}$dst",
@@ -639,63 +674,6 @@ let isBranch = 1, isTerminator = 1, isBarrier = 1, isIndirectBranch = 1 in {
                      "ljmp{l}\t{*}$dst", []>;
 }
 
-// Conditional branches
-let Uses = [EFLAGS] in {
-// Short conditional jumps
-def JO8   : IBr<0x70, (ins brtarget8:$dst), "jo\t$dst", []>;
-def JNO8  : IBr<0x71, (ins brtarget8:$dst), "jno\t$dst", []>;
-def JB8   : IBr<0x72, (ins brtarget8:$dst), "jb\t$dst", []>;
-def JAE8  : IBr<0x73, (ins brtarget8:$dst), "jae\t$dst", []>;
-def JE8   : IBr<0x74, (ins brtarget8:$dst), "je\t$dst", []>;
-def JNE8  : IBr<0x75, (ins brtarget8:$dst), "jne\t$dst", []>;
-def JBE8  : IBr<0x76, (ins brtarget8:$dst), "jbe\t$dst", []>;
-def JA8   : IBr<0x77, (ins brtarget8:$dst), "ja\t$dst", []>;
-def JS8   : IBr<0x78, (ins brtarget8:$dst), "js\t$dst", []>;
-def JNS8  : IBr<0x79, (ins brtarget8:$dst), "jns\t$dst", []>;
-def JP8   : IBr<0x7A, (ins brtarget8:$dst), "jp\t$dst", []>;
-def JNP8  : IBr<0x7B, (ins brtarget8:$dst), "jnp\t$dst", []>;
-def JL8   : IBr<0x7C, (ins brtarget8:$dst), "jl\t$dst", []>;
-def JGE8  : IBr<0x7D, (ins brtarget8:$dst), "jge\t$dst", []>;
-def JLE8  : IBr<0x7E, (ins brtarget8:$dst), "jle\t$dst", []>;
-def JG8   : IBr<0x7F, (ins brtarget8:$dst), "jg\t$dst", []>;
-
-def JCXZ8 : IBr<0xE3, (ins brtarget8:$dst), "jcxz\t$dst", []>;
-
-def JE  : IBr<0x84, (ins brtarget:$dst), "je\t$dst",
-              [(X86brcond bb:$dst, X86_COND_E, EFLAGS)]>, TB;
-def JNE : IBr<0x85, (ins brtarget:$dst), "jne\t$dst",
-              [(X86brcond bb:$dst, X86_COND_NE, EFLAGS)]>, TB;
-def JL  : IBr<0x8C, (ins brtarget:$dst), "jl\t$dst",
-              [(X86brcond bb:$dst, X86_COND_L, EFLAGS)]>, TB;
-def JLE : IBr<0x8E, (ins brtarget:$dst), "jle\t$dst",
-              [(X86brcond bb:$dst, X86_COND_LE, EFLAGS)]>, TB;
-def JG  : IBr<0x8F, (ins brtarget:$dst), "jg\t$dst",
-              [(X86brcond bb:$dst, X86_COND_G, EFLAGS)]>, TB;
-def JGE : IBr<0x8D, (ins brtarget:$dst), "jge\t$dst",
-              [(X86brcond bb:$dst, X86_COND_GE, EFLAGS)]>, TB;
-
-def JB  : IBr<0x82, (ins brtarget:$dst), "jb\t$dst",
-              [(X86brcond bb:$dst, X86_COND_B, EFLAGS)]>, TB;
-def JBE : IBr<0x86, (ins brtarget:$dst), "jbe\t$dst",
-              [(X86brcond bb:$dst, X86_COND_BE, EFLAGS)]>, TB;
-def JA  : IBr<0x87, (ins brtarget:$dst), "ja\t$dst",
-              [(X86brcond bb:$dst, X86_COND_A, EFLAGS)]>, TB;
-def JAE : IBr<0x83, (ins brtarget:$dst), "jae\t$dst",
-              [(X86brcond bb:$dst, X86_COND_AE, EFLAGS)]>, TB;
-
-def JS  : IBr<0x88, (ins brtarget:$dst), "js\t$dst",
-              [(X86brcond bb:$dst, X86_COND_S, EFLAGS)]>, TB;
-def JNS : IBr<0x89, (ins brtarget:$dst), "jns\t$dst",
-              [(X86brcond bb:$dst, X86_COND_NS, EFLAGS)]>, TB;
-def JP  : IBr<0x8A, (ins brtarget:$dst), "jp\t$dst",
-              [(X86brcond bb:$dst, X86_COND_P, EFLAGS)]>, TB;
-def JNP : IBr<0x8B, (ins brtarget:$dst), "jnp\t$dst",
-              [(X86brcond bb:$dst, X86_COND_NP, EFLAGS)]>, TB;
-def JO  : IBr<0x80, (ins brtarget:$dst), "jo\t$dst",
-              [(X86brcond bb:$dst, X86_COND_O, EFLAGS)]>, TB;
-def JNO : IBr<0x81, (ins brtarget:$dst), "jno\t$dst",
-              [(X86brcond bb:$dst, X86_COND_NO, EFLAGS)]>, TB;
-} // Uses = [EFLAGS]
 
 // Loop instructions
 
@@ -716,7 +694,7 @@ let isCall = 1 in
               XMM0, XMM1, XMM2, XMM3, XMM4, XMM5, XMM6, XMM7,
               XMM8, XMM9, XMM10, XMM11, XMM12, XMM13, XMM14, XMM15, EFLAGS],
       Uses = [ESP] in {
-    def CALLpcrel32 : Ii32<0xE8, RawFrm,
+    def CALLpcrel32 : Ii32PCRel<0xE8, RawFrm,
                            (outs), (ins i32imm_pcrel:$dst,variable_ops),
                            "call\t$dst", []>;
     def CALL32r     : I<0xFF, MRM2r, (outs), (ins GR32:$dst, variable_ops),
@@ -756,15 +734,18 @@ def TCRETURNri : I<0, Pseudo, (outs),
                  "#TC_RETURN $dst $offset",
                  []>;
 
-let isCall = 1, isTerminator = 1, isReturn = 1, isBarrier = 1 in
-  def TAILJMPd : IBr<0xE9, (ins i32imm_pcrel:$dst), "jmp\t$dst  # TAILCALL",
+// FIXME: These should be pseudo instructions that are lowered when going to
+// MCInst.
+let isCall = 1, isBranch = 1, isTerminator = 1, isReturn = 1, isBarrier = 1 in
+  def TAILJMPd : Ii32<0xE9, RawFrm, (outs),(ins i32imm_pcrel:$dst,variable_ops),
+                 "jmp\t$dst  # TAILCALL",
                  []>;
 let isCall = 1, isTerminator = 1, isReturn = 1, isBarrier = 1 in
-  def TAILJMPr : I<0xFF, MRM4r, (outs), (ins GR32:$dst), 
+  def TAILJMPr : I<0xFF, MRM4r, (outs), (ins GR32:$dst, variable_ops), 
                    "jmp{l}\t{*}$dst  # TAILCALL",
                  []>;     
 let isCall = 1, isTerminator = 1, isReturn = 1, isBarrier = 1 in
-  def TAILJMPm : I<0xFF, MRM4m, (outs), (ins i32mem:$dst),
+  def TAILJMPm : I<0xFF, MRM4m, (outs), (ins i32mem:$dst, variable_ops),
                    "jmp\t{*}$dst  # TAILCALL", []>;
 
 //===----------------------------------------------------------------------===//
@@ -877,7 +858,7 @@ def LEA32r   : I<0x8D, MRMSrcMem,
                  "lea{l}\t{$src|$dst}, {$dst|$src}",
                  [(set GR32:$dst, lea32addr:$src)]>, Requires<[In32BitMode]>;
 
-let Defs = [ECX,EDI,ESI], Uses = [ECX,EDI,ESI] in {
+let Defs = [ECX,EDI,ESI], Uses = [ECX,EDI,ESI], isCodeGenOnly = 1 in {
 def REP_MOVSB : I<0xA4, RawFrm, (outs), (ins), "{rep;movsb|rep movsb}",
                   [(X86rep_movs i8)]>, REP;
 def REP_MOVSW : I<0xA5, RawFrm, (outs), (ins), "{rep;movsw|rep movsw}",
@@ -886,16 +867,31 @@ def REP_MOVSD : I<0xA5, RawFrm, (outs), (ins), "{rep;movsl|rep movsd}",
                   [(X86rep_movs i32)]>, REP;
 }
 
-let Defs = [ECX,EDI], Uses = [AL,ECX,EDI] in
+// These use the DF flag in the EFLAGS register to inc or dec EDI and ESI
+let Defs = [EDI,ESI], Uses = [EDI,ESI,EFLAGS] in {
+def MOVSB : I<0xA4, RawFrm, (outs), (ins), "{movsb}", []>;
+def MOVSW : I<0xA5, RawFrm, (outs), (ins), "{movsw}", []>, OpSize;
+def MOVSD : I<0xA5, RawFrm, (outs), (ins), "{movsl|movsd}", []>;
+}
+
+let Defs = [ECX,EDI], Uses = [AL,ECX,EDI], isCodeGenOnly = 1 in
 def REP_STOSB : I<0xAA, RawFrm, (outs), (ins), "{rep;stosb|rep stosb}",
                   [(X86rep_stos i8)]>, REP;
-let Defs = [ECX,EDI], Uses = [AX,ECX,EDI] in
+let Defs = [ECX,EDI], Uses = [AX,ECX,EDI], isCodeGenOnly = 1 in
 def REP_STOSW : I<0xAB, RawFrm, (outs), (ins), "{rep;stosw|rep stosw}",
                   [(X86rep_stos i16)]>, REP, OpSize;
-let Defs = [ECX,EDI], Uses = [EAX,ECX,EDI] in
+let Defs = [ECX,EDI], Uses = [EAX,ECX,EDI], isCodeGenOnly = 1 in
 def REP_STOSD : I<0xAB, RawFrm, (outs), (ins), "{rep;stosl|rep stosd}",
                   [(X86rep_stos i32)]>, REP;
 
+// These use the DF flag in the EFLAGS register to inc or dec EDI
+let Defs = [EDI], Uses = [AL,EDI,EFLAGS] in
+def STOSB : I<0xAA, RawFrm, (outs), (ins), "{stosb}", []>;
+let Defs = [EDI], Uses = [AX,EDI,EFLAGS] in
+def STOSW : I<0xAB, RawFrm, (outs), (ins), "{stosw}", []>, OpSize;
+let Defs = [EDI], Uses = [EAX,EDI,EFLAGS] in
+def STOSD : I<0xAB, RawFrm, (outs), (ins), "{stosl|stosd}", []>;
+
 def SCAS8 : I<0xAE, RawFrm, (outs), (ins), "scas{b}", []>;
 def SCAS16 : I<0xAF, RawFrm, (outs), (ins), "scas{w}", []>, OpSize;
 def SCAS32 : I<0xAF, RawFrm, (outs), (ins), "scas{l}", []>;
@@ -908,6 +904,9 @@ let Defs = [RAX, RDX] in
 def RDTSC : I<0x31, RawFrm, (outs), (ins), "rdtsc", [(X86rdtsc)]>,
             TB;
 
+let Defs = [RAX, RCX, RDX] in
+def RDTSCP : I<0x01, MRM_F9, (outs), (ins), "rdtscp", []>, TB;
+
 let isBarrier = 1, hasCtrlDep = 1 in {
 def TRAP    : I<0x0B, RawFrm, (outs), (ins), "ud2", [(trap)]>, TB;
 }
@@ -996,6 +995,7 @@ def MOV32ri : Ii32<0xB8, AddRegFrm, (outs GR32:$dst), (ins i32imm:$src),
                    "mov{l}\t{$src, $dst|$dst, $src}",
                    [(set GR32:$dst, imm:$src)]>;
 }
+
 def MOV8mi  : Ii8 <0xC6, MRM0m, (outs), (ins i8mem :$dst, i8imm :$src),
                    "mov{b}\t{$src, $dst|$dst, $src}",
                    [(store (i8 imm:$src), addr:$dst)]>;
@@ -2306,98 +2306,100 @@ let isTwoAddress = 0 in {
 
 def RCL8r1 : I<0xD0, MRM2r, (outs GR8:$dst), (ins GR8:$src),
                "rcl{b}\t{1, $dst|$dst, 1}", []>;
-def RCL8m1 : I<0xD0, MRM2m, (outs i8mem:$dst), (ins i8mem:$src),
-               "rcl{b}\t{1, $dst|$dst, 1}", []>;
 let Uses = [CL] in {
 def RCL8rCL : I<0xD2, MRM2r, (outs GR8:$dst), (ins GR8:$src),
                 "rcl{b}\t{%cl, $dst|$dst, CL}", []>;
-def RCL8mCL : I<0xD2, MRM2m, (outs i8mem:$dst), (ins i8mem:$src),
-                "rcl{b}\t{%cl, $dst|$dst, CL}", []>;
 }
 def RCL8ri : Ii8<0xC0, MRM2r, (outs GR8:$dst), (ins GR8:$src, i8imm:$cnt),
                  "rcl{b}\t{$cnt, $dst|$dst, $cnt}", []>;
-def RCL8mi : Ii8<0xC0, MRM2m, (outs i8mem:$dst), (ins i8mem:$src, i8imm:$cnt),
-                 "rcl{b}\t{$cnt, $dst|$dst, $cnt}", []>;
   
 def RCL16r1 : I<0xD1, MRM2r, (outs GR16:$dst), (ins GR16:$src),
                 "rcl{w}\t{1, $dst|$dst, 1}", []>, OpSize;
-def RCL16m1 : I<0xD1, MRM2m, (outs i16mem:$dst), (ins i16mem:$src),
-                "rcl{w}\t{1, $dst|$dst, 1}", []>, OpSize;
 let Uses = [CL] in {
 def RCL16rCL : I<0xD3, MRM2r, (outs GR16:$dst), (ins GR16:$src),
                  "rcl{w}\t{%cl, $dst|$dst, CL}", []>, OpSize;
-def RCL16mCL : I<0xD3, MRM2m, (outs i16mem:$dst), (ins i16mem:$src),
-                 "rcl{w}\t{%cl, $dst|$dst, CL}", []>, OpSize;
 }
 def RCL16ri : Ii8<0xC1, MRM2r, (outs GR16:$dst), (ins GR16:$src, i8imm:$cnt),
                   "rcl{w}\t{$cnt, $dst|$dst, $cnt}", []>, OpSize;
-def RCL16mi : Ii8<0xC1, MRM2m, (outs i16mem:$dst), 
-                  (ins i16mem:$src, i8imm:$cnt),
-                  "rcl{w}\t{$cnt, $dst|$dst, $cnt}", []>, OpSize;
 
 def RCL32r1 : I<0xD1, MRM2r, (outs GR32:$dst), (ins GR32:$src),
                 "rcl{l}\t{1, $dst|$dst, 1}", []>;
-def RCL32m1 : I<0xD1, MRM2m, (outs i32mem:$dst), (ins i32mem:$src),
-                "rcl{l}\t{1, $dst|$dst, 1}", []>;
 let Uses = [CL] in {
 def RCL32rCL : I<0xD3, MRM2r, (outs GR32:$dst), (ins GR32:$src),
                  "rcl{l}\t{%cl, $dst|$dst, CL}", []>;
-def RCL32mCL : I<0xD3, MRM2m, (outs i32mem:$dst), (ins i32mem:$src),
-                 "rcl{l}\t{%cl, $dst|$dst, CL}", []>;
 }
 def RCL32ri : Ii8<0xC1, MRM2r, (outs GR32:$dst), (ins GR32:$src, i8imm:$cnt),
                   "rcl{l}\t{$cnt, $dst|$dst, $cnt}", []>;
-def RCL32mi : Ii8<0xC1, MRM2m, (outs i32mem:$dst), 
-                  (ins i32mem:$src, i8imm:$cnt),
-                  "rcl{l}\t{$cnt, $dst|$dst, $cnt}", []>;
                   
 def RCR8r1 : I<0xD0, MRM3r, (outs GR8:$dst), (ins GR8:$src),
                "rcr{b}\t{1, $dst|$dst, 1}", []>;
-def RCR8m1 : I<0xD0, MRM3m, (outs i8mem:$dst), (ins i8mem:$src),
-               "rcr{b}\t{1, $dst|$dst, 1}", []>;
 let Uses = [CL] in {
 def RCR8rCL : I<0xD2, MRM3r, (outs GR8:$dst), (ins GR8:$src),
                 "rcr{b}\t{%cl, $dst|$dst, CL}", []>;
-def RCR8mCL : I<0xD2, MRM3m, (outs i8mem:$dst), (ins i8mem:$src),
-                "rcr{b}\t{%cl, $dst|$dst, CL}", []>;
 }
 def RCR8ri : Ii8<0xC0, MRM3r, (outs GR8:$dst), (ins GR8:$src, i8imm:$cnt),
                  "rcr{b}\t{$cnt, $dst|$dst, $cnt}", []>;
-def RCR8mi : Ii8<0xC0, MRM3m, (outs i8mem:$dst), (ins i8mem:$src, i8imm:$cnt),
-                 "rcr{b}\t{$cnt, $dst|$dst, $cnt}", []>;
   
 def RCR16r1 : I<0xD1, MRM3r, (outs GR16:$dst), (ins GR16:$src),
                 "rcr{w}\t{1, $dst|$dst, 1}", []>, OpSize;
-def RCR16m1 : I<0xD1, MRM3m, (outs i16mem:$dst), (ins i16mem:$src),
-                "rcr{w}\t{1, $dst|$dst, 1}", []>, OpSize;
 let Uses = [CL] in {
 def RCR16rCL : I<0xD3, MRM3r, (outs GR16:$dst), (ins GR16:$src),
                  "rcr{w}\t{%cl, $dst|$dst, CL}", []>, OpSize;
-def RCR16mCL : I<0xD3, MRM3m, (outs i16mem:$dst), (ins i16mem:$src),
-                 "rcr{w}\t{%cl, $dst|$dst, CL}", []>, OpSize;
 }
 def RCR16ri : Ii8<0xC1, MRM3r, (outs GR16:$dst), (ins GR16:$src, i8imm:$cnt),
                   "rcr{w}\t{$cnt, $dst|$dst, $cnt}", []>, OpSize;
-def RCR16mi : Ii8<0xC1, MRM3m, (outs i16mem:$dst), 
-                  (ins i16mem:$src, i8imm:$cnt),
-                  "rcr{w}\t{$cnt, $dst|$dst, $cnt}", []>, OpSize;
 
 def RCR32r1 : I<0xD1, MRM3r, (outs GR32:$dst), (ins GR32:$src),
                 "rcr{l}\t{1, $dst|$dst, 1}", []>;
-def RCR32m1 : I<0xD1, MRM3m, (outs i32mem:$dst), (ins i32mem:$src),
-                "rcr{l}\t{1, $dst|$dst, 1}", []>;
 let Uses = [CL] in {
 def RCR32rCL : I<0xD3, MRM3r, (outs GR32:$dst), (ins GR32:$src),
                  "rcr{l}\t{%cl, $dst|$dst, CL}", []>;
-def RCR32mCL : I<0xD3, MRM3m, (outs i32mem:$dst), (ins i32mem:$src),
-                 "rcr{l}\t{%cl, $dst|$dst, CL}", []>;
 }
 def RCR32ri : Ii8<0xC1, MRM3r, (outs GR32:$dst), (ins GR32:$src, i8imm:$cnt),
                   "rcr{l}\t{$cnt, $dst|$dst, $cnt}", []>;
-def RCR32mi : Ii8<0xC1, MRM3m, (outs i32mem:$dst), 
-                  (ins i32mem:$src, i8imm:$cnt),
+
+let isTwoAddress = 0 in {
+def RCL8m1 : I<0xD0, MRM2m, (outs), (ins i8mem:$dst),
+               "rcl{b}\t{1, $dst|$dst, 1}", []>;
+def RCL8mi : Ii8<0xC0, MRM2m, (outs), (ins i8mem:$dst, i8imm:$cnt),
+                 "rcl{b}\t{$cnt, $dst|$dst, $cnt}", []>;
+def RCL16m1 : I<0xD1, MRM2m, (outs), (ins i16mem:$dst),
+                "rcl{w}\t{1, $dst|$dst, 1}", []>, OpSize;
+def RCL16mi : Ii8<0xC1, MRM2m, (outs), (ins i16mem:$dst, i8imm:$cnt),
+                  "rcl{w}\t{$cnt, $dst|$dst, $cnt}", []>, OpSize;
+def RCL32m1 : I<0xD1, MRM2m, (outs), (ins i32mem:$dst),
+                "rcl{l}\t{1, $dst|$dst, 1}", []>;
+def RCL32mi : Ii8<0xC1, MRM2m, (outs), (ins i32mem:$dst, i8imm:$cnt),
+                  "rcl{l}\t{$cnt, $dst|$dst, $cnt}", []>;
+def RCR8m1 : I<0xD0, MRM3m, (outs), (ins i8mem:$dst),
+               "rcr{b}\t{1, $dst|$dst, 1}", []>;
+def RCR8mi : Ii8<0xC0, MRM3m, (outs), (ins i8mem:$dst, i8imm:$cnt),
+                 "rcr{b}\t{$cnt, $dst|$dst, $cnt}", []>;
+def RCR16m1 : I<0xD1, MRM3m, (outs), (ins i16mem:$dst),
+                "rcr{w}\t{1, $dst|$dst, 1}", []>, OpSize;
+def RCR16mi : Ii8<0xC1, MRM3m, (outs), (ins i16mem:$dst, i8imm:$cnt),
+                  "rcr{w}\t{$cnt, $dst|$dst, $cnt}", []>, OpSize;
+def RCR32m1 : I<0xD1, MRM3m, (outs), (ins i32mem:$dst),
+                "rcr{l}\t{1, $dst|$dst, 1}", []>;
+def RCR32mi : Ii8<0xC1, MRM3m, (outs), (ins i32mem:$dst, i8imm:$cnt),
                   "rcr{l}\t{$cnt, $dst|$dst, $cnt}", []>;
 
+let Uses = [CL] in {
+def RCL8mCL : I<0xD2, MRM2m, (outs), (ins i8mem:$dst),
+                "rcl{b}\t{%cl, $dst|$dst, CL}", []>;
+def RCL16mCL : I<0xD3, MRM2m, (outs), (ins i16mem:$dst),
+                 "rcl{w}\t{%cl, $dst|$dst, CL}", []>, OpSize;
+def RCL32mCL : I<0xD3, MRM2m, (outs), (ins i32mem:$dst),
+                 "rcl{l}\t{%cl, $dst|$dst, CL}", []>;
+def RCR8mCL : I<0xD2, MRM3m, (outs), (ins i8mem:$dst),
+                "rcr{b}\t{%cl, $dst|$dst, CL}", []>;
+def RCR16mCL : I<0xD3, MRM3m, (outs), (ins i16mem:$dst),
+                 "rcr{w}\t{%cl, $dst|$dst, CL}", []>, OpSize;
+def RCR32mCL : I<0xD3, MRM3m, (outs), (ins i32mem:$dst),
+                 "rcr{l}\t{%cl, $dst|$dst, CL}", []>;
+}
+}
+
 // FIXME: provide shorter instructions when imm8 == 1
 let Uses = [CL] in {
 def ROL8rCL  : I<0xD2, MRM0r, (outs GR8 :$dst), (ins GR8 :$src),
@@ -3006,8 +3008,8 @@ let isTwoAddress = 0 in {
   def SBB32mr  : I<0x19, MRMDestMem, (outs), (ins i32mem:$dst, GR32:$src2), 
                    "sbb{l}\t{$src2, $dst|$dst, $src2}",
                    [(store (sube (load addr:$dst), GR32:$src2), addr:$dst)]>;
-  def SBB8mi  : Ii32<0x80, MRM3m, (outs), (ins i8mem:$dst, i8imm:$src2), 
-                      "sbb{b}\t{$src2, $dst|$dst, $src2}",
+  def SBB8mi  : Ii8<0x80, MRM3m, (outs), (ins i8mem:$dst, i8imm:$src2), 
+                    "sbb{b}\t{$src2, $dst|$dst, $src2}",
                    [(store (sube (loadi8 addr:$dst), imm:$src2), addr:$dst)]>;
   def SBB16mi  : Ii16<0x81, MRM3m, (outs), (ins i16mem:$dst, i16imm:$src2), 
                       "sbb{w}\t{$src2, $dst|$dst, $src2}",
@@ -3234,17 +3236,18 @@ def LAHF     : I<0x9F, RawFrm, (outs),  (ins), "lahf", []>;  // AH = flags
 
 let Uses = [EFLAGS] in {
 // Use sbb to materialize carry bit.
-
 let Defs = [EFLAGS], isCodeGenOnly = 1 in {
-def SETB_C8r : I<0x18, MRMInitReg, (outs GR8:$dst), (ins),
-                 "sbb{b}\t$dst, $dst",
+// FIXME: These are pseudo ops that should be replaced with Pat<> patterns.
+// However, Pat<> can't replicate the destination reg into the inputs of the
+// result.
+// FIXME: Change these to have encoding Pseudo when X86MCCodeEmitter replaces
+// X86CodeEmitter.
+def SETB_C8r : I<0x18, MRMInitReg, (outs GR8:$dst), (ins), "",
                  [(set GR8:$dst, (X86setcc_c X86_COND_B, EFLAGS))]>;
-def SETB_C16r : I<0x19, MRMInitReg, (outs GR16:$dst), (ins),
-                  "sbb{w}\t$dst, $dst",
+def SETB_C16r : I<0x19, MRMInitReg, (outs GR16:$dst), (ins), "",
                  [(set GR16:$dst, (X86setcc_c X86_COND_B, EFLAGS))]>,
                 OpSize;
-def SETB_C32r : I<0x19, MRMInitReg, (outs GR32:$dst), (ins),
-                  "sbb{l}\t$dst, $dst",
+def SETB_C32r : I<0x19, MRMInitReg, (outs GR32:$dst), (ins), "",
                  [(set GR32:$dst, (X86setcc_c X86_COND_B, EFLAGS))]>;
 } // isCodeGenOnly
 
@@ -3681,7 +3684,7 @@ def MOVZX32rm16: I<0xB7, MRMSrcMem, (outs GR32:$dst), (ins i16mem:$src),
                    "movz{wl|x}\t{$src, $dst|$dst, $src}",
                    [(set GR32:$dst, (zextloadi32i16 addr:$src))]>, TB;
 
-// These are the same as the regular regular MOVZX32rr8 and MOVZX32rm8
+// These are the same as the regular MOVZX32rr8 and MOVZX32rm8
 // except that they use GR32_NOREX for the output operand register class
 // instead of GR32. This allows them to operate on h registers on x86-64.
 def MOVZX32_NOREXrr8 : I<0xB6, MRMSrcReg,
@@ -3716,10 +3719,10 @@ let neverHasSideEffects = 1 in {
 
 // Alias instructions that map movr0 to xor.
 // FIXME: remove when we can teach regalloc that xor reg, reg is ok.
+// FIXME: Set encoding to pseudo.
 let Defs = [EFLAGS], isReMaterializable = 1, isAsCheapAsAMove = 1,
     isCodeGenOnly = 1 in {
-def MOV8r0   : I<0x30, MRMInitReg, (outs GR8 :$dst), (ins),
-                 "xor{b}\t$dst, $dst",
+def MOV8r0   : I<0x30, MRMInitReg, (outs GR8 :$dst), (ins), "",
                  [(set GR8:$dst, 0)]>;
 
 // We want to rewrite MOV16r0 in terms of MOV32r0, because it's a smaller
@@ -3731,8 +3734,8 @@ def MOV16r0   : I<0x31, MRMInitReg, (outs GR16:$dst), (ins),
                  "",
                  [(set GR16:$dst, 0)]>, OpSize;
                  
-def MOV32r0  : I<0x31, MRMInitReg, (outs GR32:$dst), (ins),
-                 "xor{l}\t$dst, $dst",
+// FIXME: Set encoding to pseudo.
+def MOV32r0  : I<0x31, MRMInitReg, (outs GR32:$dst), (ins), "",
                  [(set GR32:$dst, 0)]>;
 }
 
@@ -4077,7 +4080,7 @@ def LSL32rm : I<0x03, MRMSrcMem, (outs GR32:$dst), (ins i32mem:$src),
 def LSL32rr : I<0x03, MRMSrcReg, (outs GR32:$dst), (ins GR32:$src),
                 "lsl{l}\t{$src, $dst|$dst, $src}", []>, TB;
                 
-def INVLPG : I<0x01, RawFrm, (outs), (ins), "invlpg", []>, TB;
+def INVLPG : I<0x01, MRM7m, (outs), (ins i8mem:$addr), "invlpg\t$addr", []>, TB;
 
 def STRr : I<0x00, MRM1r, (outs GR16:$dst), (ins),
              "str{w}\t{$dst}", []>, TB;
@@ -4155,6 +4158,26 @@ def LLDT16r : I<0x00, MRM2r, (outs), (ins GR16:$src),
 def LLDT16m : I<0x00, MRM2m, (outs), (ins i16mem:$src),
                 "lldt{w}\t$src", []>, TB;
                 
+// Lock instruction prefix
+def LOCK_PREFIX : I<0xF0, RawFrm, (outs),  (ins), "lock", []>;
+
+// Repeat string operation instruction prefixes
+// These uses the DF flag in the EFLAGS register to inc or dec ECX
+let Defs = [ECX], Uses = [ECX,EFLAGS] in {
+// Repeat (used with INS, OUTS, MOVS, LODS and STOS)
+def REP_PREFIX : I<0xF3, RawFrm, (outs),  (ins), "rep", []>;
+// Repeat while not equal (used with CMPS and SCAS)
+def REPNE_PREFIX : I<0xF2, RawFrm, (outs),  (ins), "repne", []>;
+}
+
+// Segment override instruction prefixes
+def CS_PREFIX : I<0x2E, RawFrm, (outs),  (ins), "cs", []>;
+def SS_PREFIX : I<0x36, RawFrm, (outs),  (ins), "ss", []>;
+def DS_PREFIX : I<0x3E, RawFrm, (outs),  (ins), "ds", []>;
+def ES_PREFIX : I<0x26, RawFrm, (outs),  (ins), "es", []>;
+def FS_PREFIX : I<0x64, RawFrm, (outs),  (ins), "fs", []>;
+def GS_PREFIX : I<0x65, RawFrm, (outs),  (ins), "gs", []>;
+
 // String manipulation instructions
 
 def LODSB : I<0xAC, RawFrm, (outs), (ins), "lodsb", []>;
@@ -4219,17 +4242,17 @@ def WBINVD : I<0x09, RawFrm, (outs), (ins), "wbinvd", []>, TB;
 // VMX instructions
 
 // 66 0F 38 80
-def INVEPT : I<0x38, RawFrm, (outs), (ins), "invept", []>, OpSize, TB;
+def INVEPT : I<0x80, RawFrm, (outs), (ins), "invept", []>, OpSize, T8;
 // 66 0F 38 81
-def INVVPID : I<0x38, RawFrm, (outs), (ins), "invvpid", []>, OpSize, TB;
+def INVVPID : I<0x81, RawFrm, (outs), (ins), "invvpid", []>, OpSize, T8;
 // 0F 01 C1
-def VMCALL : I<0x01, RawFrm, (outs), (ins), "vmcall", []>, TB;
+def VMCALL : I<0x01, MRM_C1, (outs), (ins), "vmcall", []>, TB;
 def VMCLEARm : I<0xC7, MRM6m, (outs), (ins i64mem:$vmcs),
   "vmclear\t$vmcs", []>, OpSize, TB;
 // 0F 01 C2
-def VMLAUNCH : I<0x01, RawFrm, (outs), (ins), "vmlaunch", []>, TB;
+def VMLAUNCH : I<0x01, MRM_C2, (outs), (ins), "vmlaunch", []>, TB;
 // 0F 01 C3
-def VMRESUME : I<0x01, RawFrm, (outs), (ins), "vmresume", []>, TB;
+def VMRESUME : I<0x01, MRM_C3, (outs), (ins), "vmresume", []>, TB;
 def VMPTRLDm : I<0xC7, MRM6m, (outs), (ins i64mem:$vmcs),
   "vmptrld\t$vmcs", []>, TB;
 def VMPTRSTm : I<0xC7, MRM7m, (outs i64mem:$vmcs), (ins),
@@ -4251,7 +4274,7 @@ def VMWRITE32rm : I<0x79, MRMSrcMem, (outs GR32:$dst), (ins i32mem:$src),
 def VMWRITE32rr : I<0x79, MRMSrcReg, (outs GR32:$dst), (ins GR32:$src),
   "vmwrite{l}\t{$src, $dst|$dst, $src}", []>, TB;
 // 0F 01 C4
-def VMXOFF : I<0x01, RawFrm, (outs), (ins), "vmxoff", []>, OpSize;
+def VMXOFF : I<0x01, MRM_C4, (outs), (ins), "vmxoff", []>, TB;
 def VMXON : I<0xC7, MRM6m, (outs), (ins i64mem:$vmxon),
   "vmxon\t{$vmxon}", []>, XD;
 
@@ -5181,6 +5204,12 @@ include "X86InstrFPStack.td"
 include "X86Instr64bit.td"
 
 //===----------------------------------------------------------------------===//
+// SIMD support (SSE, MMX and AVX)
+//===----------------------------------------------------------------------===//
+
+include "X86InstrFragmentsSIMD.td"
+
+//===----------------------------------------------------------------------===//
 // XMM Floating point support (requires SSE / SSE2)
 //===----------------------------------------------------------------------===//
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrMMX.td b/libclamav/c++/llvm/lib/Target/X86/X86InstrMMX.td
index ab169ac..89f020c 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrMMX.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrMMX.td
@@ -14,56 +14,6 @@
 //===----------------------------------------------------------------------===//
 
 //===----------------------------------------------------------------------===//
-// MMX Pattern Fragments
-//===----------------------------------------------------------------------===//
-
-def load_mmx : PatFrag<(ops node:$ptr), (v1i64 (load node:$ptr))>;
-
-def bc_v8i8  : PatFrag<(ops node:$in), (v8i8  (bitconvert node:$in))>;
-def bc_v4i16 : PatFrag<(ops node:$in), (v4i16 (bitconvert node:$in))>;
-def bc_v2i32 : PatFrag<(ops node:$in), (v2i32 (bitconvert node:$in))>;
-def bc_v1i64 : PatFrag<(ops node:$in), (v1i64 (bitconvert node:$in))>;
-
-//===----------------------------------------------------------------------===//
-// MMX Masks
-//===----------------------------------------------------------------------===//
-
-// MMX_SHUFFLE_get_shuf_imm xform function: convert vector_shuffle mask to
-// PSHUFW imm.
-def MMX_SHUFFLE_get_shuf_imm : SDNodeXForm<vector_shuffle, [{
-  return getI8Imm(X86::getShuffleSHUFImmediate(N));
-}]>;
-
-// Patterns for: vector_shuffle v1, v2, <2, 6, 3, 7, ...>
-def mmx_unpckh : PatFrag<(ops node:$lhs, node:$rhs),
-                         (vector_shuffle node:$lhs, node:$rhs), [{
-  return X86::isUNPCKHMask(cast<ShuffleVectorSDNode>(N));
-}]>;
-
-// Patterns for: vector_shuffle v1, v2, <0, 4, 2, 5, ...>
-def mmx_unpckl : PatFrag<(ops node:$lhs, node:$rhs),
-                         (vector_shuffle node:$lhs, node:$rhs), [{
-  return X86::isUNPCKLMask(cast<ShuffleVectorSDNode>(N));
-}]>;
-
-// Patterns for: vector_shuffle v1, <undef>, <0, 0, 1, 1, ...>
-def mmx_unpckh_undef : PatFrag<(ops node:$lhs, node:$rhs),
-                               (vector_shuffle node:$lhs, node:$rhs), [{
-  return X86::isUNPCKH_v_undef_Mask(cast<ShuffleVectorSDNode>(N));
-}]>;
-
-// Patterns for: vector_shuffle v1, <undef>, <2, 2, 3, 3, ...>
-def mmx_unpckl_undef : PatFrag<(ops node:$lhs, node:$rhs),
-                               (vector_shuffle node:$lhs, node:$rhs), [{
-  return X86::isUNPCKL_v_undef_Mask(cast<ShuffleVectorSDNode>(N));
-}]>;
-
-def mmx_pshufw : PatFrag<(ops node:$lhs, node:$rhs),
-                         (vector_shuffle node:$lhs, node:$rhs), [{
-  return X86::isPSHUFDMask(cast<ShuffleVectorSDNode>(N));
-}], MMX_SHUFFLE_get_shuf_imm>;
-
-//===----------------------------------------------------------------------===//
 // MMX Multiclasses
 //===----------------------------------------------------------------------===//
 
@@ -536,11 +486,10 @@ def MMX_MASKMOVQ64: MMXI64<0xF7, MRMSrcReg, (outs), (ins VR64:$src, VR64:$mask),
 
 // Alias instructions that map zero vector to pxor.
 let isReMaterializable = 1, isCodeGenOnly = 1 in {
-  def MMX_V_SET0       : MMXI<0xEF, MRMInitReg, (outs VR64:$dst), (ins),
-                              "pxor\t$dst, $dst",
+  // FIXME: Change encoding to pseudo.
+  def MMX_V_SET0       : MMXI<0xEF, MRMInitReg, (outs VR64:$dst), (ins), "",
                               [(set VR64:$dst, (v2i32 immAllZerosV))]>;
-  def MMX_V_SETALLONES : MMXI<0x76, MRMInitReg, (outs VR64:$dst), (ins),
-                              "pcmpeqd\t$dst, $dst",
+  def MMX_V_SETALLONES : MMXI<0x76, MRMInitReg, (outs VR64:$dst), (ins), "",
                               [(set VR64:$dst, (v2i32 immAllOnesV))]>;
 }
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86InstrSSE.td b/libclamav/c++/llvm/lib/Target/X86/X86InstrSSE.td
index 94b9b55..9b2140f 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86InstrSSE.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86InstrSSE.td
@@ -505,9 +505,10 @@ def Int_COMISSrm: PSI<0x2F, MRMSrcMem, (outs), (ins VR128:$src1, f128mem:$src2),
 // Alias instructions that map fld0 to pxor for sse.
 let isReMaterializable = 1, isAsCheapAsAMove = 1, isCodeGenOnly = 1,
     canFoldAsLoad = 1 in
-def FsFLD0SS : I<0xEF, MRMInitReg, (outs FR32:$dst), (ins),
-                 "pxor\t$dst, $dst", [(set FR32:$dst, fp32imm0)]>,
-               Requires<[HasSSE1]>, TB, OpSize;
+  // FIXME: Set encoding to pseudo!
+def FsFLD0SS : I<0xEF, MRMInitReg, (outs FR32:$dst), (ins), "",
+                 [(set FR32:$dst, fp32imm0)]>,
+                 Requires<[HasSSE1]>, TB, OpSize;
 
 // Alias instruction to do FR32 reg-to-reg copy using movaps. Upper bits are
 // disregarded.
@@ -761,6 +762,9 @@ let Constraints = "$src1 = $dst" in {
 } // Constraints = "$src1 = $dst"
 
 
+def : Pat<(movlhps VR128:$src1, (bc_v4i32 (v2i64 (X86vzload addr:$src2)))),
+          (MOVHPSrm VR128:$src1, addr:$src2)>;
+
 def MOVLPSmr : PSI<0x13, MRMDestMem, (outs), (ins f64mem:$dst, VR128:$src),
                    "movlps\t{$src, $dst|$dst, $src}",
                    [(store (f64 (vector_extract (bc_v2f64 (v4f32 VR128:$src)),
@@ -1025,10 +1029,10 @@ def STMXCSR : PSI<0xAE, MRM3m, (outs), (ins i32mem:$dst),
 // Alias instructions that map zero vector to pxor / xorp* for sse.
 // We set canFoldAsLoad because this can be converted to a constant-pool
 // load of an all-zeros value if folding it would be beneficial.
+// FIXME: Change encoding to pseudo!
 let isReMaterializable = 1, isAsCheapAsAMove = 1, canFoldAsLoad = 1,
     isCodeGenOnly = 1 in
-def V_SET0 : PSI<0x57, MRMInitReg, (outs VR128:$dst), (ins),
-                 "xorps\t$dst, $dst",
+def V_SET0 : PSI<0x57, MRMInitReg, (outs VR128:$dst), (ins), "",
                  [(set VR128:$dst, (v4i32 immAllZerosV))]>;
 
 let Predicates = [HasSSE1] in {
@@ -1269,8 +1273,8 @@ def Int_COMISDrm: PDI<0x2F, MRMSrcMem, (outs), (ins VR128:$src1, f128mem:$src2),
 // Alias instructions that map fld0 to pxor for sse.
 let isReMaterializable = 1, isAsCheapAsAMove = 1, isCodeGenOnly = 1,
     canFoldAsLoad = 1 in
-def FsFLD0SD : I<0xEF, MRMInitReg, (outs FR64:$dst), (ins),
-                 "pxor\t$dst, $dst", [(set FR64:$dst, fpimm0)]>,
+def FsFLD0SD : I<0xEF, MRMInitReg, (outs FR64:$dst), (ins), "",
+                 [(set FR64:$dst, fpimm0)]>,
                Requires<[HasSSE2]>, TB, OpSize;
 
 // Alias instruction to do FR64 reg-to-reg copy using movapd. Upper bits are
@@ -2311,9 +2315,9 @@ def CLFLUSH : I<0xAE, MRM7m, (outs), (ins i8mem:$src),
               TB, Requires<[HasSSE2]>;
 
 // Load, store, and memory fence
-def LFENCE : I<0xAE, MRM5r, (outs), (ins),
+def LFENCE : I<0xAE, MRM_E8, (outs), (ins),
                "lfence", [(int_x86_sse2_lfence)]>, TB, Requires<[HasSSE2]>;
-def MFENCE : I<0xAE, MRM6r, (outs), (ins),
+def MFENCE : I<0xAE, MRM_F0, (outs), (ins),
                "mfence", [(int_x86_sse2_mfence)]>, TB, Requires<[HasSSE2]>;
 
 //TODO: custom lower this so as to never even generate the noop
@@ -2329,8 +2333,8 @@ def : Pat<(membarrier (i8 imm:$ll), (i8 imm:$ls), (i8 imm:$sl), (i8 imm:$ss),
 // load of an all-ones value if folding it would be beneficial.
 let isReMaterializable = 1, isAsCheapAsAMove = 1, canFoldAsLoad = 1,
     isCodeGenOnly = 1 in
-  def V_SETALLONES : PDI<0x76, MRMInitReg, (outs VR128:$dst), (ins),
-                         "pcmpeqd\t$dst, $dst",
+  // FIXME: Change encoding to pseudo.
+  def V_SETALLONES : PDI<0x76, MRMInitReg, (outs VR128:$dst), (ins), "",
                          [(set VR128:$dst, (v4i32 immAllOnesV))]>;
 
 // FR64 to 128-bit vector conversion.
@@ -2612,9 +2616,9 @@ let Constraints = "$src1 = $dst" in {
 }
 
 // Thread synchronization
-def MONITOR : I<0x01, MRM1r, (outs), (ins), "monitor",
+def MONITOR : I<0x01, MRM_C8, (outs), (ins), "monitor",
                 [(int_x86_sse3_monitor EAX, ECX, EDX)]>,TB, Requires<[HasSSE3]>;
-def MWAIT   : I<0x01, MRM1r, (outs), (ins), "mwait",
+def MWAIT   : I<0x01, MRM_C9, (outs), (ins), "mwait",
                 [(int_x86_sse3_mwait ECX, EAX)]>, TB, Requires<[HasSSE3]>;
 
 // vector_shuffle v1, <undef> <1, 1, 3, 3>
@@ -3746,7 +3750,8 @@ def PTESTrm : SS48I<0x17, MRMSrcMem, (outs), (ins VR128:$src1, i128mem:$src2),
 
 def MOVNTDQArm : SS48I<0x2A, MRMSrcMem, (outs VR128:$dst), (ins i128mem:$src),
                        "movntdqa\t{$src, $dst|$dst, $src}",
-                       [(set VR128:$dst, (int_x86_sse41_movntdqa addr:$src))]>;
+                       [(set VR128:$dst, (int_x86_sse41_movntdqa addr:$src))]>,
+                       OpSize;
 
 
 //===----------------------------------------------------------------------===//
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86JITInfo.cpp b/libclamav/c++/llvm/lib/Target/X86/X86JITInfo.cpp
index f363903..d297d24 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86JITInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86JITInfo.cpp
@@ -297,6 +297,7 @@ extern "C" {
       push  edx
       push  ecx
       and   esp, -16
+      sub   esp, 16
       mov   eax, dword ptr [ebp+4]
       mov   dword ptr [esp+4], eax
       mov   dword ptr [esp], ebp
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86MCAsmInfo.cpp b/libclamav/c++/llvm/lib/Target/X86/X86MCAsmInfo.cpp
index c0cab86..91c0fbb 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86MCAsmInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86MCAsmInfo.cpp
@@ -72,7 +72,6 @@ X86ELFMCAsmInfo::X86ELFMCAsmInfo(const Triple &Triple) {
 
   PrivateGlobalPrefix = ".L";
   WeakRefDirective = "\t.weak\t";
-  SetDirective = "\t.set\t";
   PCSymbol = ".";
 
   // Set up DWARF directives
@@ -95,27 +94,4 @@ MCSection *X86ELFMCAsmInfo::getNonexecutableStackSection(MCContext &Ctx) const {
 X86MCAsmInfoCOFF::X86MCAsmInfoCOFF(const Triple &Triple) {
   AsmTransCBE = x86_asm_table;
   AssemblerDialect = AsmWriterFlavor;
-}
-
-
-X86WinMCAsmInfo::X86WinMCAsmInfo(const Triple &Triple) {
-  AsmTransCBE = x86_asm_table;
-  AssemblerDialect = AsmWriterFlavor;
-
-  GlobalPrefix = "_";
-  CommentString = ";";
-
-  PrivateGlobalPrefix = "$";
-  AlignDirective = "\tALIGN\t";
-  ZeroDirective = "\tdb\t";
-  AsciiDirective = "\tdb\t";
-  AscizDirective = 0;
-  Data8bitsDirective = "\tdb\t";
-  Data16bitsDirective = "\tdw\t";
-  Data32bitsDirective = "\tdd\t";
-  Data64bitsDirective = "\tdq\t";
-  HasDotTypeDotSizeDirective = false;
-  HasSingleParameterDotFile = false;
-
-  AlignmentIsInBytes = true;
-}
+}
\ No newline at end of file
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86MCAsmInfo.h b/libclamav/c++/llvm/lib/Target/X86/X86MCAsmInfo.h
index ca227b7..69716bf 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86MCAsmInfo.h
+++ b/libclamav/c++/llvm/lib/Target/X86/X86MCAsmInfo.h
@@ -33,11 +33,6 @@ namespace llvm {
   struct X86MCAsmInfoCOFF : public MCAsmInfoCOFF {
     explicit X86MCAsmInfoCOFF(const Triple &Triple);
   };
-
-  struct X86WinMCAsmInfo : public MCAsmInfo {
-    explicit X86WinMCAsmInfo(const Triple &Triple);
-  };
-
 } // namespace llvm
 
 #endif
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86MCCodeEmitter.cpp b/libclamav/c++/llvm/lib/Target/X86/X86MCCodeEmitter.cpp
new file mode 100644
index 0000000..d0ec0de
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Target/X86/X86MCCodeEmitter.cpp
@@ -0,0 +1,635 @@
+//===-- X86/X86MCCodeEmitter.cpp - Convert X86 code to machine code -------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file implements the X86MCCodeEmitter class.
+//
+//===----------------------------------------------------------------------===//
+
+#define DEBUG_TYPE "x86-emitter"
+#include "X86.h"
+#include "X86InstrInfo.h"
+#include "X86FixupKinds.h"
+#include "llvm/MC/MCCodeEmitter.h"
+#include "llvm/MC/MCExpr.h"
+#include "llvm/MC/MCInst.h"
+#include "llvm/Support/raw_ostream.h"
+using namespace llvm;
+
+namespace {
+class X86MCCodeEmitter : public MCCodeEmitter {
+  X86MCCodeEmitter(const X86MCCodeEmitter &); // DO NOT IMPLEMENT
+  void operator=(const X86MCCodeEmitter &); // DO NOT IMPLEMENT
+  const TargetMachine &TM;
+  const TargetInstrInfo &TII;
+  MCContext &Ctx;
+  bool Is64BitMode;
+public:
+  X86MCCodeEmitter(TargetMachine &tm, MCContext &ctx, bool is64Bit) 
+    : TM(tm), TII(*TM.getInstrInfo()), Ctx(ctx) {
+    Is64BitMode = is64Bit;
+  }
+
+  ~X86MCCodeEmitter() {}
+
+  unsigned getNumFixupKinds() const {
+    return 3;
+  }
+
+  const MCFixupKindInfo &getFixupKindInfo(MCFixupKind Kind) const {
+    const static MCFixupKindInfo Infos[] = {
+      { "reloc_pcrel_4byte", 0, 4 * 8 },
+      { "reloc_pcrel_1byte", 0, 1 * 8 },
+      { "reloc_riprel_4byte", 0, 4 * 8 }
+    };
+    
+    if (Kind < FirstTargetFixupKind)
+      return MCCodeEmitter::getFixupKindInfo(Kind);
+
+    assert(unsigned(Kind - FirstTargetFixupKind) < getNumFixupKinds() &&
+           "Invalid kind!");
+    return Infos[Kind - FirstTargetFixupKind];
+  }
+  
+  static unsigned GetX86RegNum(const MCOperand &MO) {
+    return X86RegisterInfo::getX86RegNum(MO.getReg());
+  }
+  
+  void EmitByte(unsigned char C, unsigned &CurByte, raw_ostream &OS) const {
+    OS << (char)C;
+    ++CurByte;
+  }
+  
+  void EmitConstant(uint64_t Val, unsigned Size, unsigned &CurByte,
+                    raw_ostream &OS) const {
+    // Output the constant in little endian byte order.
+    for (unsigned i = 0; i != Size; ++i) {
+      EmitByte(Val & 255, CurByte, OS);
+      Val >>= 8;
+    }
+  }
+
+  void EmitImmediate(const MCOperand &Disp, 
+                     unsigned ImmSize, MCFixupKind FixupKind,
+                     unsigned &CurByte, raw_ostream &OS,
+                     SmallVectorImpl<MCFixup> &Fixups,
+                     int ImmOffset = 0) const;
+  
+  inline static unsigned char ModRMByte(unsigned Mod, unsigned RegOpcode,
+                                        unsigned RM) {
+    assert(Mod < 4 && RegOpcode < 8 && RM < 8 && "ModRM Fields out of range!");
+    return RM | (RegOpcode << 3) | (Mod << 6);
+  }
+  
+  void EmitRegModRMByte(const MCOperand &ModRMReg, unsigned RegOpcodeFld,
+                        unsigned &CurByte, raw_ostream &OS) const {
+    EmitByte(ModRMByte(3, RegOpcodeFld, GetX86RegNum(ModRMReg)), CurByte, OS);
+  }
+  
+  void EmitSIBByte(unsigned SS, unsigned Index, unsigned Base,
+                   unsigned &CurByte, raw_ostream &OS) const {
+    // SIB byte is in the same format as the ModRMByte.
+    EmitByte(ModRMByte(SS, Index, Base), CurByte, OS);
+  }
+  
+  
+  void EmitMemModRMByte(const MCInst &MI, unsigned Op,
+                        unsigned RegOpcodeField, 
+                        unsigned TSFlags, unsigned &CurByte, raw_ostream &OS,
+                        SmallVectorImpl<MCFixup> &Fixups) const;
+  
+  void EncodeInstruction(const MCInst &MI, raw_ostream &OS,
+                         SmallVectorImpl<MCFixup> &Fixups) const;
+  
+};
+
+} // end anonymous namespace
+
+
+MCCodeEmitter *llvm::createX86_32MCCodeEmitter(const Target &,
+                                               TargetMachine &TM,
+                                               MCContext &Ctx) {
+  return new X86MCCodeEmitter(TM, Ctx, false);
+}
+
+MCCodeEmitter *llvm::createX86_64MCCodeEmitter(const Target &,
+                                               TargetMachine &TM,
+                                               MCContext &Ctx) {
+  return new X86MCCodeEmitter(TM, Ctx, true);
+}
+
+
+/// isDisp8 - Return true if this signed displacement fits in a 8-bit 
+/// sign-extended field. 
+static bool isDisp8(int Value) {
+  return Value == (signed char)Value;
+}
+
+/// getImmFixupKind - Return the appropriate fixup kind to use for an immediate
+/// in an instruction with the specified TSFlags.
+static MCFixupKind getImmFixupKind(unsigned TSFlags) {
+  unsigned Size = X86II::getSizeOfImm(TSFlags);
+  bool isPCRel = X86II::isImmPCRel(TSFlags);
+  
+  switch (Size) {
+  default: assert(0 && "Unknown immediate size");
+  case 1: return isPCRel ? MCFixupKind(X86::reloc_pcrel_1byte) : FK_Data_1;
+  case 4: return isPCRel ? MCFixupKind(X86::reloc_pcrel_4byte) : FK_Data_4;
+  case 2: assert(!isPCRel); return FK_Data_2;
+  case 8: assert(!isPCRel); return FK_Data_8;
+  }
+}
+
+
+void X86MCCodeEmitter::
+EmitImmediate(const MCOperand &DispOp, unsigned Size, MCFixupKind FixupKind,
+              unsigned &CurByte, raw_ostream &OS,
+              SmallVectorImpl<MCFixup> &Fixups, int ImmOffset) const {
+  // If this is a simple integer displacement that doesn't require a relocation,
+  // emit it now.
+  if (DispOp.isImm()) {
+    EmitConstant(DispOp.getImm()+ImmOffset, Size, CurByte, OS);
+    return;
+  }
+
+  // If we have an immoffset, add it to the expression.
+  const MCExpr *Expr = DispOp.getExpr();
+  if (ImmOffset)
+    Expr = MCBinaryExpr::CreateAdd(Expr,MCConstantExpr::Create(ImmOffset, Ctx),
+                                   Ctx);
+  
+  // Emit a symbolic constant as a fixup and 4 zeros.
+  Fixups.push_back(MCFixup::Create(CurByte, Expr, FixupKind));
+  EmitConstant(0, Size, CurByte, OS);
+}
+
+
+void X86MCCodeEmitter::EmitMemModRMByte(const MCInst &MI, unsigned Op,
+                                        unsigned RegOpcodeField,
+                                        unsigned TSFlags, unsigned &CurByte,
+                                        raw_ostream &OS,
+                                        SmallVectorImpl<MCFixup> &Fixups) const{
+  const MCOperand &Disp     = MI.getOperand(Op+3);
+  const MCOperand &Base     = MI.getOperand(Op);
+  const MCOperand &Scale    = MI.getOperand(Op+1);
+  const MCOperand &IndexReg = MI.getOperand(Op+2);
+  unsigned BaseReg = Base.getReg();
+  
+  // Handle %rip relative addressing.
+  if (BaseReg == X86::RIP) {    // [disp32+RIP] in X86-64 mode
+    assert(IndexReg.getReg() == 0 && Is64BitMode &&
+           "Invalid rip-relative address");
+    EmitByte(ModRMByte(0, RegOpcodeField, 5), CurByte, OS);
+    
+    // rip-relative addressing is actually relative to the *next* instruction.
+    // Since an immediate can follow the mod/rm byte for an instruction, this
+    // means that we need to bias the immediate field of the instruction with
+    // the size of the immediate field.  If we have this case, add it into the
+    // expression to emit.
+    int ImmSize = X86II::hasImm(TSFlags) ? X86II::getSizeOfImm(TSFlags) : 0;
+    EmitImmediate(Disp, 4, MCFixupKind(X86::reloc_riprel_4byte),
+                  CurByte, OS, Fixups, -ImmSize);
+    return;
+  }
+  
+  unsigned BaseRegNo = BaseReg ? GetX86RegNum(Base) : -1U;
+  
+  // Determine whether a SIB byte is needed.
+  // If no BaseReg, issue a RIP relative instruction only if the MCE can 
+  // resolve addresses on-the-fly, otherwise use SIB (Intel Manual 2A, table
+  // 2-7) and absolute references.
+
+  if (// The SIB byte must be used if there is an index register.
+      IndexReg.getReg() == 0 && 
+      // The SIB byte must be used if the base is ESP/RSP/R12, all of which
+      // encode to an R/M value of 4, which indicates that a SIB byte is
+      // present.
+      BaseRegNo != N86::ESP &&
+      // If there is no base register and we're in 64-bit mode, we need a SIB
+      // byte to emit an addr that is just 'disp32' (the non-RIP relative form).
+      (!Is64BitMode || BaseReg != 0)) {
+
+    if (BaseReg == 0) {          // [disp32]     in X86-32 mode
+      EmitByte(ModRMByte(0, RegOpcodeField, 5), CurByte, OS);
+      EmitImmediate(Disp, 4, FK_Data_4, CurByte, OS, Fixups);
+      return;
+    }
+    
+    // If the base is not EBP/ESP and there is no displacement, use simple
+    // indirect register encoding, this handles addresses like [EAX].  The
+    // encoding for [EBP] with no displacement means [disp32] so we handle it
+    // by emitting a displacement of 0 below.
+    if (Disp.isImm() && Disp.getImm() == 0 && BaseRegNo != N86::EBP) {
+      EmitByte(ModRMByte(0, RegOpcodeField, BaseRegNo), CurByte, OS);
+      return;
+    }
+    
+    // Otherwise, if the displacement fits in a byte, encode as [REG+disp8].
+    if (Disp.isImm() && isDisp8(Disp.getImm())) {
+      EmitByte(ModRMByte(1, RegOpcodeField, BaseRegNo), CurByte, OS);
+      EmitImmediate(Disp, 1, FK_Data_1, CurByte, OS, Fixups);
+      return;
+    }
+    
+    // Otherwise, emit the most general non-SIB encoding: [REG+disp32]
+    EmitByte(ModRMByte(2, RegOpcodeField, BaseRegNo), CurByte, OS);
+    EmitImmediate(Disp, 4, FK_Data_4, CurByte, OS, Fixups);
+    return;
+  }
+    
+  // We need a SIB byte, so start by outputting the ModR/M byte first
+  assert(IndexReg.getReg() != X86::ESP &&
+         IndexReg.getReg() != X86::RSP && "Cannot use ESP as index reg!");
+  
+  bool ForceDisp32 = false;
+  bool ForceDisp8  = false;
+  if (BaseReg == 0) {
+    // If there is no base register, we emit the special case SIB byte with
+    // MOD=0, BASE=5, to JUST get the index, scale, and displacement.
+    EmitByte(ModRMByte(0, RegOpcodeField, 4), CurByte, OS);
+    ForceDisp32 = true;
+  } else if (!Disp.isImm()) {
+    // Emit the normal disp32 encoding.
+    EmitByte(ModRMByte(2, RegOpcodeField, 4), CurByte, OS);
+    ForceDisp32 = true;
+  } else if (Disp.getImm() == 0 && BaseReg != X86::EBP) {
+    // Emit no displacement ModR/M byte
+    EmitByte(ModRMByte(0, RegOpcodeField, 4), CurByte, OS);
+  } else if (isDisp8(Disp.getImm())) {
+    // Emit the disp8 encoding.
+    EmitByte(ModRMByte(1, RegOpcodeField, 4), CurByte, OS);
+    ForceDisp8 = true;           // Make sure to force 8 bit disp if Base=EBP
+  } else {
+    // Emit the normal disp32 encoding.
+    EmitByte(ModRMByte(2, RegOpcodeField, 4), CurByte, OS);
+  }
+  
+  // Calculate what the SS field value should be...
+  static const unsigned SSTable[] = { ~0, 0, 1, ~0, 2, ~0, ~0, ~0, 3 };
+  unsigned SS = SSTable[Scale.getImm()];
+  
+  if (BaseReg == 0) {
+    // Handle the SIB byte for the case where there is no base, see Intel 
+    // Manual 2A, table 2-7. The displacement has already been output.
+    unsigned IndexRegNo;
+    if (IndexReg.getReg())
+      IndexRegNo = GetX86RegNum(IndexReg);
+    else // Examples: [ESP+1*<noreg>+4] or [scaled idx]+disp32 (MOD=0,BASE=5)
+      IndexRegNo = 4;
+    EmitSIBByte(SS, IndexRegNo, 5, CurByte, OS);
+  } else {
+    unsigned IndexRegNo;
+    if (IndexReg.getReg())
+      IndexRegNo = GetX86RegNum(IndexReg);
+    else
+      IndexRegNo = 4;   // For example [ESP+1*<noreg>+4]
+    EmitSIBByte(SS, IndexRegNo, GetX86RegNum(Base), CurByte, OS);
+  }
+  
+  // Do we need to output a displacement?
+  if (ForceDisp8)
+    EmitImmediate(Disp, 1, FK_Data_1, CurByte, OS, Fixups);
+  else if (ForceDisp32 || Disp.getImm() != 0)
+    EmitImmediate(Disp, 4, FK_Data_4, CurByte, OS, Fixups);
+}
+
+/// DetermineREXPrefix - Determine if the MCInst has to be encoded with an X86-64
+/// REX prefix which specifies 1) 64-bit instructions, 2) non-default operand
+/// size, and 3) use of X86-64 extended registers.
+static unsigned DetermineREXPrefix(const MCInst &MI, unsigned TSFlags,
+                                   const TargetInstrDesc &Desc) {
+  // Pseudo instructions never have a rex byte.
+  if ((TSFlags & X86II::FormMask) == X86II::Pseudo)
+    return 0;
+  
+  unsigned REX = 0;
+  if (TSFlags & X86II::REX_W)
+    REX |= 1 << 3;
+  
+  if (MI.getNumOperands() == 0) return REX;
+  
+  unsigned NumOps = MI.getNumOperands();
+  // FIXME: MCInst should explicitize the two-addrness.
+  bool isTwoAddr = NumOps > 1 &&
+                      Desc.getOperandConstraint(1, TOI::TIED_TO) != -1;
+  
+  // If it accesses SPL, BPL, SIL, or DIL, then it requires a 0x40 REX prefix.
+  unsigned i = isTwoAddr ? 1 : 0;
+  for (; i != NumOps; ++i) {
+    const MCOperand &MO = MI.getOperand(i);
+    if (!MO.isReg()) continue;
+    unsigned Reg = MO.getReg();
+    if (!X86InstrInfo::isX86_64NonExtLowByteReg(Reg)) continue;
+    // FIXME: The caller of DetermineREXPrefix slaps this prefix onto anything
+    // that returns non-zero.
+    REX |= 0x40;
+    break;
+  }
+  
+  switch (TSFlags & X86II::FormMask) {
+  case X86II::MRMInitReg: assert(0 && "FIXME: Remove this!");
+  case X86II::MRMSrcReg:
+    if (MI.getOperand(0).isReg() &&
+        X86InstrInfo::isX86_64ExtendedReg(MI.getOperand(0).getReg()))
+      REX |= 1 << 2;
+    i = isTwoAddr ? 2 : 1;
+    for (; i != NumOps; ++i) {
+      const MCOperand &MO = MI.getOperand(i);
+      if (MO.isReg() && X86InstrInfo::isX86_64ExtendedReg(MO.getReg()))
+        REX |= 1 << 0;
+    }
+    break;
+  case X86II::MRMSrcMem: {
+    if (MI.getOperand(0).isReg() &&
+        X86InstrInfo::isX86_64ExtendedReg(MI.getOperand(0).getReg()))
+      REX |= 1 << 2;
+    unsigned Bit = 0;
+    i = isTwoAddr ? 2 : 1;
+    for (; i != NumOps; ++i) {
+      const MCOperand &MO = MI.getOperand(i);
+      if (MO.isReg()) {
+        if (X86InstrInfo::isX86_64ExtendedReg(MO.getReg()))
+          REX |= 1 << Bit;
+        Bit++;
+      }
+    }
+    break;
+  }
+  case X86II::MRM0m: case X86II::MRM1m:
+  case X86II::MRM2m: case X86II::MRM3m:
+  case X86II::MRM4m: case X86II::MRM5m:
+  case X86II::MRM6m: case X86II::MRM7m:
+  case X86II::MRMDestMem: {
+    unsigned e = (isTwoAddr ? X86AddrNumOperands+1 : X86AddrNumOperands);
+    i = isTwoAddr ? 1 : 0;
+    if (NumOps > e && MI.getOperand(e).isReg() &&
+        X86InstrInfo::isX86_64ExtendedReg(MI.getOperand(e).getReg()))
+      REX |= 1 << 2;
+    unsigned Bit = 0;
+    for (; i != e; ++i) {
+      const MCOperand &MO = MI.getOperand(i);
+      if (MO.isReg()) {
+        if (X86InstrInfo::isX86_64ExtendedReg(MO.getReg()))
+          REX |= 1 << Bit;
+        Bit++;
+      }
+    }
+    break;
+  }
+  default:
+    if (MI.getOperand(0).isReg() &&
+        X86InstrInfo::isX86_64ExtendedReg(MI.getOperand(0).getReg()))
+      REX |= 1 << 0;
+    i = isTwoAddr ? 2 : 1;
+    for (unsigned e = NumOps; i != e; ++i) {
+      const MCOperand &MO = MI.getOperand(i);
+      if (MO.isReg() && X86InstrInfo::isX86_64ExtendedReg(MO.getReg()))
+        REX |= 1 << 2;
+    }
+    break;
+  }
+  return REX;
+}
+
+void X86MCCodeEmitter::
+EncodeInstruction(const MCInst &MI, raw_ostream &OS,
+                  SmallVectorImpl<MCFixup> &Fixups) const {
+  unsigned Opcode = MI.getOpcode();
+  const TargetInstrDesc &Desc = TII.get(Opcode);
+  unsigned TSFlags = Desc.TSFlags;
+
+  // Keep track of the current byte being emitted.
+  unsigned CurByte = 0;
+  
+  // FIXME: We should emit the prefixes in exactly the same order as GAS does,
+  // in order to provide diffability.
+
+  // Emit the lock opcode prefix as needed.
+  if (TSFlags & X86II::LOCK)
+    EmitByte(0xF0, CurByte, OS);
+  
+  // Emit segment override opcode prefix as needed.
+  switch (TSFlags & X86II::SegOvrMask) {
+  default: assert(0 && "Invalid segment!");
+  case 0: break;  // No segment override!
+  case X86II::FS:
+    EmitByte(0x64, CurByte, OS);
+    break;
+  case X86II::GS:
+    EmitByte(0x65, CurByte, OS);
+    break;
+  }
+  
+  // Emit the repeat opcode prefix as needed.
+  if ((TSFlags & X86II::Op0Mask) == X86II::REP)
+    EmitByte(0xF3, CurByte, OS);
+  
+  // Emit the operand size opcode prefix as needed.
+  if (TSFlags & X86II::OpSize)
+    EmitByte(0x66, CurByte, OS);
+  
+  // Emit the address size opcode prefix as needed.
+  if (TSFlags & X86II::AdSize)
+    EmitByte(0x67, CurByte, OS);
+  
+  bool Need0FPrefix = false;
+  switch (TSFlags & X86II::Op0Mask) {
+  default: assert(0 && "Invalid prefix!");
+  case 0: break;  // No prefix!
+  case X86II::REP: break; // already handled.
+  case X86II::TB:  // Two-byte opcode prefix
+  case X86II::T8:  // 0F 38
+  case X86II::TA:  // 0F 3A
+    Need0FPrefix = true;
+    break;
+  case X86II::TF: // F2 0F 38
+    EmitByte(0xF2, CurByte, OS);
+    Need0FPrefix = true;
+    break;
+  case X86II::XS:   // F3 0F
+    EmitByte(0xF3, CurByte, OS);
+    Need0FPrefix = true;
+    break;
+  case X86II::XD:   // F2 0F
+    EmitByte(0xF2, CurByte, OS);
+    Need0FPrefix = true;
+    break;
+  case X86II::D8: EmitByte(0xD8, CurByte, OS); break;
+  case X86II::D9: EmitByte(0xD9, CurByte, OS); break;
+  case X86II::DA: EmitByte(0xDA, CurByte, OS); break;
+  case X86II::DB: EmitByte(0xDB, CurByte, OS); break;
+  case X86II::DC: EmitByte(0xDC, CurByte, OS); break;
+  case X86II::DD: EmitByte(0xDD, CurByte, OS); break;
+  case X86II::DE: EmitByte(0xDE, CurByte, OS); break;
+  case X86II::DF: EmitByte(0xDF, CurByte, OS); break;
+  }
+  
+  // Handle REX prefix.
+  // FIXME: Can this come before F2 etc to simplify emission?
+  if (Is64BitMode) {
+    if (unsigned REX = DetermineREXPrefix(MI, TSFlags, Desc))
+      EmitByte(0x40 | REX, CurByte, OS);
+  }
+  
+  // 0x0F escape code must be emitted just before the opcode.
+  if (Need0FPrefix)
+    EmitByte(0x0F, CurByte, OS);
+  
+  // FIXME: Pull this up into previous switch if REX can be moved earlier.
+  switch (TSFlags & X86II::Op0Mask) {
+  case X86II::TF:    // F2 0F 38
+  case X86II::T8:    // 0F 38
+    EmitByte(0x38, CurByte, OS);
+    break;
+  case X86II::TA:    // 0F 3A
+    EmitByte(0x3A, CurByte, OS);
+    break;
+  }
+  
+  // If this is a two-address instruction, skip one of the register operands.
+  unsigned NumOps = Desc.getNumOperands();
+  unsigned CurOp = 0;
+  if (NumOps > 1 && Desc.getOperandConstraint(1, TOI::TIED_TO) != -1)
+    ++CurOp;
+  else if (NumOps > 2 && Desc.getOperandConstraint(NumOps-1, TOI::TIED_TO)== 0)
+    // Skip the last source operand that is tied_to the dest reg. e.g. LXADD32
+    --NumOps;
+  
+  unsigned char BaseOpcode = X86II::getBaseOpcodeFor(TSFlags);
+  switch (TSFlags & X86II::FormMask) {
+  case X86II::MRMInitReg:
+    assert(0 && "FIXME: Remove this form when the JIT moves to MCCodeEmitter!");
+  default: errs() << "FORM: " << (TSFlags & X86II::FormMask) << "\n";
+    assert(0 && "Unknown FormMask value in X86MCCodeEmitter!");
+  case X86II::Pseudo: return; // Pseudo instructions encode to nothing.
+  case X86II::RawFrm:
+    EmitByte(BaseOpcode, CurByte, OS);
+    break;
+      
+  case X86II::AddRegFrm:
+    EmitByte(BaseOpcode + GetX86RegNum(MI.getOperand(CurOp++)), CurByte, OS);
+    break;
+      
+  case X86II::MRMDestReg:
+    EmitByte(BaseOpcode, CurByte, OS);
+    EmitRegModRMByte(MI.getOperand(CurOp),
+                     GetX86RegNum(MI.getOperand(CurOp+1)), CurByte, OS);
+    CurOp += 2;
+    break;
+  
+  case X86II::MRMDestMem:
+    EmitByte(BaseOpcode, CurByte, OS);
+    EmitMemModRMByte(MI, CurOp,
+                     GetX86RegNum(MI.getOperand(CurOp + X86AddrNumOperands)),
+                     TSFlags, CurByte, OS, Fixups);
+    CurOp += X86AddrNumOperands + 1;
+    break;
+      
+  case X86II::MRMSrcReg:
+    EmitByte(BaseOpcode, CurByte, OS);
+    EmitRegModRMByte(MI.getOperand(CurOp+1), GetX86RegNum(MI.getOperand(CurOp)),
+                     CurByte, OS);
+    CurOp += 2;
+    break;
+    
+  case X86II::MRMSrcMem: {
+    EmitByte(BaseOpcode, CurByte, OS);
+
+    // FIXME: Maybe lea should have its own form?  This is a horrible hack.
+    int AddrOperands;
+    if (Opcode == X86::LEA64r || Opcode == X86::LEA64_32r ||
+        Opcode == X86::LEA16r || Opcode == X86::LEA32r)
+      AddrOperands = X86AddrNumOperands - 1; // No segment register
+    else
+      AddrOperands = X86AddrNumOperands;
+    
+    EmitMemModRMByte(MI, CurOp+1, GetX86RegNum(MI.getOperand(CurOp)),
+                     TSFlags, CurByte, OS, Fixups);
+    CurOp += AddrOperands + 1;
+    break;
+  }
+
+  case X86II::MRM0r: case X86II::MRM1r:
+  case X86II::MRM2r: case X86II::MRM3r:
+  case X86II::MRM4r: case X86II::MRM5r:
+  case X86II::MRM6r: case X86II::MRM7r:
+    EmitByte(BaseOpcode, CurByte, OS);
+    EmitRegModRMByte(MI.getOperand(CurOp++),
+                     (TSFlags & X86II::FormMask)-X86II::MRM0r,
+                     CurByte, OS);
+    break;
+  case X86II::MRM0m: case X86II::MRM1m:
+  case X86II::MRM2m: case X86II::MRM3m:
+  case X86II::MRM4m: case X86II::MRM5m:
+  case X86II::MRM6m: case X86II::MRM7m:
+    EmitByte(BaseOpcode, CurByte, OS);
+    EmitMemModRMByte(MI, CurOp, (TSFlags & X86II::FormMask)-X86II::MRM0m,
+                     TSFlags, CurByte, OS, Fixups);
+    CurOp += X86AddrNumOperands;
+    break;
+  case X86II::MRM_C1:
+    EmitByte(BaseOpcode, CurByte, OS);
+    EmitByte(0xC1, CurByte, OS);
+    break;
+  case X86II::MRM_C2:
+    EmitByte(BaseOpcode, CurByte, OS);
+    EmitByte(0xC2, CurByte, OS);
+    break;
+  case X86II::MRM_C3:
+    EmitByte(BaseOpcode, CurByte, OS);
+    EmitByte(0xC3, CurByte, OS);
+    break;
+  case X86II::MRM_C4:
+    EmitByte(BaseOpcode, CurByte, OS);
+    EmitByte(0xC4, CurByte, OS);
+    break;
+  case X86II::MRM_C8:
+    EmitByte(BaseOpcode, CurByte, OS);
+    EmitByte(0xC8, CurByte, OS);
+    break;
+  case X86II::MRM_C9:
+    EmitByte(BaseOpcode, CurByte, OS);
+    EmitByte(0xC9, CurByte, OS);
+    break;
+  case X86II::MRM_E8:
+    EmitByte(BaseOpcode, CurByte, OS);
+    EmitByte(0xE8, CurByte, OS);
+    break;
+  case X86II::MRM_F0:
+    EmitByte(BaseOpcode, CurByte, OS);
+    EmitByte(0xF0, CurByte, OS);
+    break;
+  case X86II::MRM_F8:
+    EmitByte(BaseOpcode, CurByte, OS);
+    EmitByte(0xF8, CurByte, OS);
+    break;
+  case X86II::MRM_F9:
+    EmitByte(BaseOpcode, CurByte, OS);
+    EmitByte(0xF9, CurByte, OS);
+    break;
+  }
+  
+  // If there is a remaining operand, it must be a trailing immediate.  Emit it
+  // according to the right size for the instruction.
+  // FIXME: This should pass in whether the value is pc relative or not.  This
+  // information should be acquired from TSFlags as well.
+  if (CurOp != NumOps)
+    EmitImmediate(MI.getOperand(CurOp++),
+                  X86II::getSizeOfImm(TSFlags), getImmFixupKind(TSFlags),
+                  CurByte, OS, Fixups);
+  
+#ifndef NDEBUG
+  // FIXME: Verify.
+  if (/*!Desc.isVariadic() &&*/ CurOp != NumOps) {
+    errs() << "Cannot encode all operands of: ";
+    MI.dump();
+    errs() << '\n';
+    abort();
+  }
+#endif
+}
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86MCTargetExpr.cpp b/libclamav/c++/llvm/lib/Target/X86/X86MCTargetExpr.cpp
new file mode 100644
index 0000000..17b4fe8
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Target/X86/X86MCTargetExpr.cpp
@@ -0,0 +1,48 @@
+//===- X86MCTargetExpr.cpp - X86 Target Specific MCExpr Implementation ----===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#include "X86MCTargetExpr.h"
+#include "llvm/MC/MCContext.h"
+#include "llvm/MC/MCSymbol.h"
+#include "llvm/MC/MCValue.h"
+#include "llvm/Support/raw_ostream.h"
+using namespace llvm;
+
+X86MCTargetExpr *X86MCTargetExpr::Create(const MCSymbol *Sym, VariantKind K,
+                                         MCContext &Ctx) {
+  return new (Ctx) X86MCTargetExpr(Sym, K);
+}
+
+void X86MCTargetExpr::PrintImpl(raw_ostream &OS) const {
+  OS << *Sym;
+  
+  switch (Kind) {
+  case Invalid:   OS << "@<invalid>"; break;
+  case GOT:       OS << "@GOT"; break;
+  case GOTOFF:    OS << "@GOTOFF"; break;
+  case GOTPCREL:  OS << "@GOTPCREL"; break;
+  case GOTTPOFF:  OS << "@GOTTPOFF"; break;
+  case INDNTPOFF: OS << "@INDNTPOFF"; break;
+  case NTPOFF:    OS << "@NTPOFF"; break;
+  case PLT:       OS << "@PLT"; break;
+  case TLSGD:     OS << "@TLSGD"; break;
+  case TPOFF:     OS << "@TPOFF"; break;
+  }
+}
+
+bool X86MCTargetExpr::EvaluateAsRelocatableImpl(MCValue &Res) const {
+  // FIXME: I don't know if this is right, it followed MCSymbolRefExpr.
+  
+  // Evaluate recursively if this is a variable.
+  if (Sym->isVariable())
+    return Sym->getValue()->EvaluateAsRelocatable(Res);
+  
+  Res = MCValue::get(Sym, 0, 0);
+  return true;
+}
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86MCTargetExpr.h b/libclamav/c++/llvm/lib/Target/X86/X86MCTargetExpr.h
new file mode 100644
index 0000000..7de8a5c
--- /dev/null
+++ b/libclamav/c++/llvm/lib/Target/X86/X86MCTargetExpr.h
@@ -0,0 +1,49 @@
+//===- X86MCTargetExpr.h - X86 Target Specific MCExpr -----------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef X86_MCTARGETEXPR_H
+#define X86_MCTARGETEXPR_H
+
+#include "llvm/MC/MCExpr.h"
+
+namespace llvm {
+
+/// X86MCTargetExpr - This class represents symbol variants, like foo at GOT.
+class X86MCTargetExpr : public MCTargetExpr {
+public:
+  enum VariantKind {
+    Invalid,
+    GOT,
+    GOTOFF,
+    GOTPCREL,
+    GOTTPOFF,
+    INDNTPOFF,
+    NTPOFF,
+    PLT,
+    TLSGD,
+    TPOFF
+  };
+private:
+  /// Sym - The symbol being referenced.
+  const MCSymbol * const Sym;
+  /// Kind - The modifier.
+  const VariantKind Kind;
+  
+  X86MCTargetExpr(const MCSymbol *S, VariantKind K) : Sym(S), Kind(K) {}
+public:
+  static X86MCTargetExpr *Create(const MCSymbol *Sym, VariantKind K,
+                                 MCContext &Ctx);
+  
+  void PrintImpl(raw_ostream &OS) const;
+  bool EvaluateAsRelocatableImpl(MCValue &Res) const;
+};
+  
+} // end namespace llvm
+
+#endif
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86MachineFunctionInfo.h b/libclamav/c++/llvm/lib/Target/X86/X86MachineFunctionInfo.h
index fafcf7e..4b2529b 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86MachineFunctionInfo.h
+++ b/libclamav/c++/llvm/lib/Target/X86/X86MachineFunctionInfo.h
@@ -18,12 +18,6 @@
 
 namespace llvm {
 
-enum NameDecorationStyle {
-  None,
-  StdCall,
-  FastCall
-};
-  
 /// X86MachineFunctionInfo - This class is derived from MachineFunction and
 /// contains private X86 target-specific information for each MachineFunction.
 class X86MachineFunctionInfo : public MachineFunctionInfo {
@@ -41,16 +35,11 @@ class X86MachineFunctionInfo : public MachineFunctionInfo {
   /// Used on windows platform for stdcall & fastcall name decoration
   unsigned BytesToPopOnReturn;
 
-  /// DecorationStyle - If the function requires additional name decoration,
-  /// DecorationStyle holds the right way to do so.
-  NameDecorationStyle DecorationStyle;
-
   /// ReturnAddrIndex - FrameIndex for return slot.
   int ReturnAddrIndex;
 
-  /// TailCallReturnAddrDelta - Delta the ReturnAddr stack slot is moved
-  /// Used for creating an area before the register spill area on the stack
-  /// the returnaddr can be savely move to this area
+  /// TailCallReturnAddrDelta - The number of bytes by which return address
+  /// stack slot is moved as the result of tail call optimization.
   int TailCallReturnAddrDelta;
 
   /// SRetReturnReg - Some subtargets require that sret lowering includes
@@ -67,7 +56,6 @@ public:
   X86MachineFunctionInfo() : ForceFramePointer(false),
                              CalleeSavedFrameSize(0),
                              BytesToPopOnReturn(0),
-                             DecorationStyle(None),
                              ReturnAddrIndex(0),
                              TailCallReturnAddrDelta(0),
                              SRetReturnReg(0),
@@ -77,7 +65,6 @@ public:
     : ForceFramePointer(false),
       CalleeSavedFrameSize(0),
       BytesToPopOnReturn(0),
-      DecorationStyle(None),
       ReturnAddrIndex(0),
       TailCallReturnAddrDelta(0),
       SRetReturnReg(0),
@@ -92,9 +79,6 @@ public:
   unsigned getBytesToPopOnReturn() const { return BytesToPopOnReturn; }
   void setBytesToPopOnReturn (unsigned bytes) { BytesToPopOnReturn = bytes;}
 
-  NameDecorationStyle getDecorationStyle() const { return DecorationStyle; }
-  void setDecorationStyle(NameDecorationStyle style) { DecorationStyle = style;}
-
   int getRAIndex() const { return ReturnAddrIndex; }
   void setRAIndex(int Index) { ReturnAddrIndex = Index; }
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.cpp b/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.cpp
index f959a2d..8524236 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.cpp
@@ -473,9 +473,9 @@ bool X86RegisterInfo::hasReservedSpillSlot(MachineFunction &MF, unsigned Reg,
 }
 
 int
-X86RegisterInfo::getFrameIndexOffset(MachineFunction &MF, int FI) const {
+X86RegisterInfo::getFrameIndexOffset(const MachineFunction &MF, int FI) const {
   const TargetFrameInfo &TFI = *MF.getTarget().getFrameInfo();
-  MachineFrameInfo *MFI = MF.getFrameInfo();
+  const MachineFrameInfo *MFI = MF.getFrameInfo();
   int Offset = MFI->getObjectOffset(FI) - TFI.getOffsetOfLocalArea();
   uint64_t StackSize = MFI->getStackSize();
 
@@ -485,7 +485,7 @@ X86RegisterInfo::getFrameIndexOffset(MachineFunction &MF, int FI) const {
       Offset += SlotSize;
     } else {
       unsigned Align = MFI->getObjectAlignment(FI);
-      assert( (-(Offset + StackSize)) % Align == 0);
+      assert((-(Offset + StackSize)) % Align == 0);
       Align = 0;
       return Offset + StackSize;
     }
@@ -498,7 +498,7 @@ X86RegisterInfo::getFrameIndexOffset(MachineFunction &MF, int FI) const {
     Offset += SlotSize;
 
     // Skip the RETADDR move area
-    X86MachineFunctionInfo *X86FI = MF.getInfo<X86MachineFunctionInfo>();
+    const X86MachineFunctionInfo *X86FI = MF.getInfo<X86MachineFunctionInfo>();
     int TailCallReturnAddrDelta = X86FI->getTCReturnAddrDelta();
     if (TailCallReturnAddrDelta < 0)
       Offset -= TailCallReturnAddrDelta;
@@ -627,10 +627,6 @@ X86RegisterInfo::processFunctionBeforeCalleeSavedScan(MachineFunction &MF,
                                                       RegScavenger *RS) const {
   MachineFrameInfo *MFI = MF.getFrameInfo();
 
-  // Calculate and set max stack object alignment early, so we can decide
-  // whether we will need stack realignment (and thus FP).
-  MFI->calculateMaxStackAlignment();
-
   X86MachineFunctionInfo *X86FI = MF.getInfo<X86MachineFunctionInfo>();
   int32_t TailCallReturnAddrDelta = X86FI->getTCReturnAddrDelta();
 
@@ -1242,13 +1238,19 @@ void X86RegisterInfo::emitEpilogue(MachineFunction &MF,
     }
 
     // Jump to label or value in register.
-    if (RetOpcode == X86::TCRETURNdi|| RetOpcode == X86::TCRETURNdi64)
+    if (RetOpcode == X86::TCRETURNdi|| RetOpcode == X86::TCRETURNdi64) {
       BuildMI(MBB, MBBI, DL, TII.get(X86::TAILJMPd)).
-        addGlobalAddress(JumpTarget.getGlobal(), JumpTarget.getOffset());
-    else if (RetOpcode== X86::TCRETURNri64)
+        addGlobalAddress(JumpTarget.getGlobal(), JumpTarget.getOffset(),
+                         JumpTarget.getTargetFlags());
+    } else if (RetOpcode == X86::TCRETURNri64) {
       BuildMI(MBB, MBBI, DL, TII.get(X86::TAILJMPr64), JumpTarget.getReg());
-    else
+    } else {
       BuildMI(MBB, MBBI, DL, TII.get(X86::TAILJMPr), JumpTarget.getReg());
+    }
+
+    MachineInstr *NewMI = prior(MBBI);
+    for (unsigned i = 2, e = MBBI->getNumOperands(); i != e; ++i)
+      NewMI->addOperand(MBBI->getOperand(i));
 
     // Delete the pseudo instruction TCRETURN.
     MBB.erase(MBBI);
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.h b/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.h
index dec3fba..8fb5e92 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.h
+++ b/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.h
@@ -156,7 +156,7 @@ public:
   // Debug information queries.
   unsigned getRARegister() const;
   unsigned getFrameRegister(const MachineFunction &MF) const;
-  int getFrameIndexOffset(MachineFunction &MF, int FI) const;
+  int getFrameIndexOffset(const MachineFunction &MF, int FI) const;
   void getInitialFrameState(std::vector<MachineMove> &Moves) const;
 
   // Exception handling queries.
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.td b/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.td
index 6db0cc3..1559bf7 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.td
+++ b/libclamav/c++/llvm/lib/Target/X86/X86RegisterInfo.td
@@ -512,20 +512,17 @@ def GR64_ABCD : RegisterClass<"X86", [i64], 64, [RAX, RCX, RDX, RBX]> {
   let SubRegClassList = [GR8_ABCD_L, GR8_ABCD_H, GR16_ABCD, GR32_ABCD];
 }
 
-// GR8_NOREX, GR16_NOREX, GR32_NOREX, GR64_NOREX - Subclasses of
-// GR8, GR16, GR32, and GR64 which contain only the first 8 GPRs.
-// On x86-64, GR64_NOREX, GR32_NOREX and GR16_NOREX are the classes
-// of registers which do not by themselves require a REX prefix.
+// GR8_NOREX - GR8 registers which do not require a REX prefix.
 def GR8_NOREX : RegisterClass<"X86", [i8], 8,
-                              [AL, CL, DL, AH, CH, DH, BL, BH,
-                               SIL, DIL, BPL, SPL]> {
+                              [AL, CL, DL, AH, CH, DH, BL, BH]> {
   let MethodProtos = [{
     iterator allocation_order_begin(const MachineFunction &MF) const;
     iterator allocation_order_end(const MachineFunction &MF) const;
   }];
   let MethodBodies = [{
+    // In 64-bit mode, it's not safe to blindly allocate H registers.
     static const unsigned X86_GR8_NOREX_AO_64[] = {
-      X86::AL, X86::CL, X86::DL, X86::SIL, X86::DIL, X86::BL, X86::BPL
+      X86::AL, X86::CL, X86::DL, X86::BL
     };
 
     GR8_NOREXClass::iterator
@@ -541,21 +538,15 @@ def GR8_NOREX : RegisterClass<"X86", [i8], 8,
     GR8_NOREXClass::iterator
     GR8_NOREXClass::allocation_order_end(const MachineFunction &MF) const {
       const TargetMachine &TM = MF.getTarget();
-      const TargetRegisterInfo *RI = TM.getRegisterInfo();
       const X86Subtarget &Subtarget = TM.getSubtarget<X86Subtarget>();
-      // Does the function dedicate RBP / EBP to being a frame ptr?
-      if (!Subtarget.is64Bit())
-        // In 32-mode, none of the 8-bit registers aliases EBP or ESP.
-        return begin() + 8;
-      else if (RI->hasFP(MF))
-        // If so, don't allocate SPL or BPL.
-        return array_endof(X86_GR8_NOREX_AO_64) - 1;
-      else
-        // If not, just don't allocate SPL.
+      if (Subtarget.is64Bit())
         return array_endof(X86_GR8_NOREX_AO_64);
+      else
+        return end();
     }
   }];
 }
+// GR16_NOREX - GR16 registers which do not require a REX prefix.
 def GR16_NOREX : RegisterClass<"X86", [i16], 16,
                                [AX, CX, DX, SI, DI, BX, BP, SP]> {
   let SubRegClassList = [GR8_NOREX, GR8_NOREX];
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.cpp b/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.cpp
index 2039be7..adef5bc 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.cpp
@@ -53,9 +53,9 @@ ClassifyGlobalReference(const GlobalValue *GV, const TargetMachine &TM) const {
   if (GV->hasDLLImportLinkage())
     return X86II::MO_DLLIMPORT;
 
-  // GV with ghost linkage (in JIT lazy compilation mode) do not require an
+  // Materializable GVs (in JIT lazy compilation mode) do not require an
   // extra load from stub.
-  bool isDecl = GV->isDeclaration() && !GV->hasNotBeenReadFromBitcode();
+  bool isDecl = GV->isDeclaration() && !GV->isMaterializable();
 
   // X86-64 in PIC mode.
   if (isPICStyleRIPRel()) {
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.h b/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.h
index 618dd10..5e05c2f 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.h
+++ b/libclamav/c++/llvm/lib/Target/X86/X86Subtarget.h
@@ -175,7 +175,7 @@ public:
     else if (isTargetDarwin())
       p = "e-p:32:32-f64:32:64-i64:32:64-f80:128:128-n8:16:32";
     else if (isTargetMingw() || isTargetWindows())
-      p = "e-p:32:32-f64:64:64-i64:64:64-f80:128:128-n8:16:32";
+      p = "e-p:32:32-f64:64:64-i64:64:64-f80:32:32-n8:16:32";
     else
       p = "e-p:32:32-f64:32:64-i64:32:64-f80:32:32-n8:16:32";
 
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.cpp b/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.cpp
index 731c3ab..b507871 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.cpp
@@ -30,9 +30,8 @@ static const MCAsmInfo *createMCAsmInfo(const Target &T, StringRef TT) {
   case Triple::MinGW32:
   case Triple::MinGW64:
   case Triple::Cygwin:
-    return new X86MCAsmInfoCOFF(TheTriple);
   case Triple::Win32:
-    return new X86WinMCAsmInfo(TheTriple);
+    return new X86MCAsmInfoCOFF(TheTriple);
   default:
     return new X86ELFMCAsmInfo(TheTriple);
   }
@@ -48,8 +47,10 @@ extern "C" void LLVMInitializeX86Target() {
   RegisterAsmInfoFn B(TheX86_64Target, createMCAsmInfo);
 
   // Register the code emitter.
-  TargetRegistry::RegisterCodeEmitter(TheX86_32Target, createX86MCCodeEmitter);
-  TargetRegistry::RegisterCodeEmitter(TheX86_64Target, createX86MCCodeEmitter);
+  TargetRegistry::RegisterCodeEmitter(TheX86_32Target,
+                                      createX86_32MCCodeEmitter);
+  TargetRegistry::RegisterCodeEmitter(TheX86_64Target,
+                                      createX86_64MCCodeEmitter);
 }
 
 
@@ -145,10 +146,6 @@ bool X86TargetMachine::addInstSelector(PassManagerBase &PM,
   // Install an instruction selector.
   PM.add(createX86ISelDag(*this, OptLevel));
 
-  // If we're using Fast-ISel, clean up the mess.
-  if (EnableFastISel)
-    PM.add(createDeadMachineInstructionElimPass());
-
   // Install a pass to insert x87 FP_REG_KILL instructions, as needed.
   PM.add(createX87FPRegKillInserterPass());
 
@@ -168,22 +165,6 @@ bool X86TargetMachine::addPostRegAlloc(PassManagerBase &PM,
 
 bool X86TargetMachine::addCodeEmitter(PassManagerBase &PM,
                                       CodeGenOpt::Level OptLevel,
-                                      MachineCodeEmitter &MCE) {
-  // FIXME: Move this to TargetJITInfo!
-  // On Darwin, do not override 64-bit setting made in X86TargetMachine().
-  if (DefRelocModel == Reloc::Default && 
-      (!Subtarget.isTargetDarwin() || !Subtarget.is64Bit())) {
-    setRelocationModel(Reloc::Static);
-    Subtarget.setPICStyle(PICStyles::None);
-  }
-  
-  PM.add(createX86CodeEmitterPass(*this, MCE));
-
-  return false;
-}
-
-bool X86TargetMachine::addCodeEmitter(PassManagerBase &PM,
-                                      CodeGenOpt::Level OptLevel,
                                       JITCodeEmitter &JCE) {
   // FIXME: Move this to TargetJITInfo!
   // On Darwin, do not override 64-bit setting made in X86TargetMachine().
@@ -199,34 +180,6 @@ bool X86TargetMachine::addCodeEmitter(PassManagerBase &PM,
   return false;
 }
 
-bool X86TargetMachine::addCodeEmitter(PassManagerBase &PM,
-                                      CodeGenOpt::Level OptLevel,
-                                      ObjectCodeEmitter &OCE) {
-  PM.add(createX86ObjectCodeEmitterPass(*this, OCE));
-  return false;
-}
-
-bool X86TargetMachine::addSimpleCodeEmitter(PassManagerBase &PM,
-                                            CodeGenOpt::Level OptLevel,
-                                            MachineCodeEmitter &MCE) {
-  PM.add(createX86CodeEmitterPass(*this, MCE));
-  return false;
-}
-
-bool X86TargetMachine::addSimpleCodeEmitter(PassManagerBase &PM,
-                                            CodeGenOpt::Level OptLevel,
-                                            JITCodeEmitter &JCE) {
-  PM.add(createX86JITCodeEmitterPass(*this, JCE));
-  return false;
-}
-
-bool X86TargetMachine::addSimpleCodeEmitter(PassManagerBase &PM,
-                                            CodeGenOpt::Level OptLevel,
-                                            ObjectCodeEmitter &OCE) {
-  PM.add(createX86ObjectCodeEmitterPass(*this, OCE));
-  return false;
-}
-
 void X86TargetMachine::setCodeModelForStatic() {
 
     if (getCodeModel() != CodeModel::Default) return;
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.h b/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.h
index d05bebd..eee29be 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.h
+++ b/libclamav/c++/llvm/lib/Target/X86/X86TargetMachine.h
@@ -79,20 +79,7 @@ public:
   virtual bool addPreRegAlloc(PassManagerBase &PM, CodeGenOpt::Level OptLevel);
   virtual bool addPostRegAlloc(PassManagerBase &PM, CodeGenOpt::Level OptLevel);
   virtual bool addCodeEmitter(PassManagerBase &PM, CodeGenOpt::Level OptLevel,
-                              MachineCodeEmitter &MCE);
-  virtual bool addCodeEmitter(PassManagerBase &PM, CodeGenOpt::Level OptLevel,
                               JITCodeEmitter &JCE);
-  virtual bool addCodeEmitter(PassManagerBase &PM, CodeGenOpt::Level OptLevel,
-                              ObjectCodeEmitter &OCE);
-  virtual bool addSimpleCodeEmitter(PassManagerBase &PM,
-                                    CodeGenOpt::Level OptLevel,
-                                    MachineCodeEmitter &MCE);
-  virtual bool addSimpleCodeEmitter(PassManagerBase &PM,
-                                    CodeGenOpt::Level OptLevel,
-                                    JITCodeEmitter &JCE);
-  virtual bool addSimpleCodeEmitter(PassManagerBase &PM,
-                                    CodeGenOpt::Level OptLevel,
-                                    ObjectCodeEmitter &OCE);
 };
 
 /// X86_32TargetMachine - X86 32-bit target machine.
diff --git a/libclamav/c++/llvm/lib/Target/X86/X86TargetObjectFile.cpp b/libclamav/c++/llvm/lib/Target/X86/X86TargetObjectFile.cpp
index 41ad153..b8cef7d 100644
--- a/libclamav/c++/llvm/lib/Target/X86/X86TargetObjectFile.cpp
+++ b/libclamav/c++/llvm/lib/Target/X86/X86TargetObjectFile.cpp
@@ -8,9 +8,9 @@
 //===----------------------------------------------------------------------===//
 
 #include "X86TargetObjectFile.h"
+#include "X86MCTargetExpr.h"
 #include "llvm/CodeGen/MachineModuleInfoImpls.h"
 #include "llvm/MC/MCContext.h"
-#include "llvm/MC/MCExpr.h"
 #include "llvm/Target/Mangler.h"
 #include "llvm/ADT/SmallString.h"
 using namespace llvm;
@@ -35,7 +35,7 @@ getSymbolForDwarfGlobalReference(const GlobalValue *GV, Mangler *Mang,
   // Add information about the stub reference to MachOMMI so that the stub gets
   // emitted by the asmprinter.
   MCSymbol *Sym = getContext().GetOrCreateSymbol(Name.str());
-  const MCSymbol *&StubSym = MachOMMI.getGVStubEntry(Sym);
+  MCSymbol *&StubSym = MachOMMI.getGVStubEntry(Sym);
   if (StubSym == 0) {
     Name.clear();
     Mang->getNameWithPrefix(Name, GV, false);
@@ -55,11 +55,12 @@ getSymbolForDwarfGlobalReference(const GlobalValue *GV, Mangler *Mang,
   IsIndirect = true;
   IsPCRel    = true;
   
+  // FIXME: Use GetSymbolWithGlobalValueBase.
   SmallString<128> Name;
   Mang->getNameWithPrefix(Name, GV, false);
-  Name += "@GOTPCREL";
+  const MCSymbol *Sym = getContext().CreateSymbol(Name);
   const MCExpr *Res =
-    MCSymbolRefExpr::Create(Name.str(), getContext());
+    X86MCTargetExpr::Create(Sym, X86MCTargetExpr::GOTPCREL, getContext());
   const MCExpr *Four = MCConstantExpr::Create(4, getContext());
   return MCBinaryExpr::CreateAdd(Res, Four, getContext());
 }
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/ArgumentPromotion.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/ArgumentPromotion.cpp
index d8190a4..325d353 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/ArgumentPromotion.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/ArgumentPromotion.cpp
@@ -247,7 +247,7 @@ static bool PrefixIn(const ArgPromotion::IndicesVector &Indices,
     return Low != Set.end() && IsPrefix(*Low, Indices);
 }
 
-/// Mark the given indices (ToMark) as safe in the the given set of indices
+/// Mark the given indices (ToMark) as safe in the given set of indices
 /// (Safe). Marking safe usually means adding ToMark to Safe. However, if there
 /// is already a prefix of Indices in Safe, Indices are implicitly marked safe
 /// already. Furthermore, any indices that Indices is itself a prefix of, are
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/ConstantMerge.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/ConstantMerge.cpp
index 4972687..3c05f88 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/ConstantMerge.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/ConstantMerge.cpp
@@ -19,10 +19,11 @@
 
 #define DEBUG_TYPE "constmerge"
 #include "llvm/Transforms/IPO.h"
+#include "llvm/DerivedTypes.h"
 #include "llvm/Module.h"
 #include "llvm/Pass.h"
+#include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/Statistic.h"
-#include <map>
 using namespace llvm;
 
 STATISTIC(NumMerged, "Number of global constants merged");
@@ -48,10 +49,10 @@ ModulePass *llvm::createConstantMergePass() { return new ConstantMerge(); }
 bool ConstantMerge::runOnModule(Module &M) {
   // Map unique constant/section pairs to globals.  We don't want to merge
   // globals in different sections.
-  std::map<std::pair<Constant*, std::string>, GlobalVariable*> CMap;
+  DenseMap<Constant*, GlobalVariable*> CMap;
 
   // Replacements - This vector contains a list of replacements to perform.
-  std::vector<std::pair<GlobalVariable*, GlobalVariable*> > Replacements;
+  SmallVector<std::pair<GlobalVariable*, GlobalVariable*>, 32> Replacements;
 
   bool MadeChange = false;
 
@@ -76,19 +77,21 @@ bool ConstantMerge::runOnModule(Module &M) {
         continue;
       }
       
-      // Only process constants with initializers.
-      if (GV->isConstant() && GV->hasDefinitiveInitializer()) {
-        Constant *Init = GV->getInitializer();
-
-        // Check to see if the initializer is already known.
-        GlobalVariable *&Slot = CMap[std::make_pair(Init, GV->getSection())];
-
-        if (Slot == 0) {    // Nope, add it to the map.
-          Slot = GV;
-        } else if (GV->hasLocalLinkage()) {    // Yup, this is a duplicate!
-          // Make all uses of the duplicate constant use the canonical version.
-          Replacements.push_back(std::make_pair(GV, Slot));
-        }
+      // Only process constants with initializers in the default address space.
+      if (!GV->isConstant() ||!GV->hasDefinitiveInitializer() ||
+          GV->getType()->getAddressSpace() != 0 || !GV->getSection().empty())
+        continue;
+      
+      Constant *Init = GV->getInitializer();
+
+      // Check to see if the initializer is already known.
+      GlobalVariable *&Slot = CMap[Init];
+
+      if (Slot == 0) {    // Nope, add it to the map.
+        Slot = GV;
+      } else if (GV->hasLocalLinkage()) {    // Yup, this is a duplicate!
+        // Make all uses of the duplicate constant use the canonical version.
+        Replacements.push_back(std::make_pair(GV, Slot));
       }
     }
 
@@ -100,11 +103,11 @@ bool ConstantMerge::runOnModule(Module &M) {
     // now.  This avoids invalidating the pointers in CMap, which are unneeded
     // now.
     for (unsigned i = 0, e = Replacements.size(); i != e; ++i) {
-      // Eliminate any uses of the dead global...
+      // Eliminate any uses of the dead global.
       Replacements[i].first->replaceAllUsesWith(Replacements[i].second);
 
-      // Delete the global value from the module...
-      M.getGlobalList().erase(Replacements[i].first);
+      // Delete the global value from the module.
+      Replacements[i].first->eraseFromParent();
     }
 
     NumMerged += Replacements.size();
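As an aside, the two-phase structure of the rewritten pass (map each initializer to a canonical global, record duplicates, apply replacements afterwards) can be sketched outside LLVM with plain containers. This is an illustrative model, not LLVM API; `MockGlobal` and `mergeConstants` are invented names:

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Minimal model of ConstantMerge: globals with identical initializer
// "payloads" are folded onto one canonical copy.  Replacements are
// collected first and applied later, mirroring the pass's two-phase
// structure (mutating while iterating would invalidate map entries).
struct MockGlobal {
  std::string Init;   // stands in for the Constant* initializer
  bool LocalLinkage;  // only local-linkage duplicates may be replaced
};

// Returns (duplicate, canonical) index pairs, like the Replacements vector.
std::vector<std::pair<int, int>>
mergeConstants(const std::vector<MockGlobal> &Globals) {
  std::map<std::string, int> CMap;  // initializer -> canonical index
  std::vector<std::pair<int, int>> Replacements;
  for (int i = 0, e = (int)Globals.size(); i != e; ++i) {
    auto It = CMap.find(Globals[i].Init);
    if (It == CMap.end())
      CMap.emplace(Globals[i].Init, i);          // first sighting: canonical
    else if (Globals[i].LocalLinkage)
      Replacements.emplace_back(i, It->second);  // duplicate: record, defer
  }
  return Replacements;
}
```

Note how a non-local duplicate is left alone, matching the `GV->hasLocalLinkage()` guard in the pass.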
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/GlobalOpt.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/GlobalOpt.cpp
index ee260e9..ac91631 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/GlobalOpt.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/GlobalOpt.cpp
@@ -638,8 +638,8 @@ static bool AllUsesOfValueWillTrapIfNull(Value *V,
     } else if (PHINode *PN = dyn_cast<PHINode>(*UI)) {
       // If we've already seen this phi node, ignore it, it has already been
       // checked.
-      if (PHIs.insert(PN))
-        return AllUsesOfValueWillTrapIfNull(PN, PHIs);
+      if (PHIs.insert(PN) && !AllUsesOfValueWillTrapIfNull(PN, PHIs))
+        return false;
     } else if (isa<ICmpInst>(*UI) &&
                isa<ConstantPointerNull>(UI->getOperand(1))) {
       // Ignore setcc X, null
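The GlobalOpt fix above changes the loop from "return the result of the first recursive phi check" to "fail only when a user fails, otherwise keep scanning". A stripped-down model of the two behaviors (illustrative only, not the real recursion over LLVM users):

```cpp
#include <vector>

// Model of the loop in AllUsesOfValueWillTrapIfNull.  The old code
// returned the first recursive result directly, silently skipping every
// remaining user; the fix makes the function an AND over all users.
bool oldAllUses(const std::vector<bool> &Users) {
  for (bool U : Users)
    return U;             // bug: decides based on the first user alone
  return true;
}

bool newAllUses(const std::vector<bool> &Users) {
  for (bool U : Users)
    if (!U)
      return false;       // fixed: bail out only on failure
  return true;            // every user passed
}
```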
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/Inliner.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/Inliner.cpp
index 0990278..752a97c 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/Inliner.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/Inliner.cpp
@@ -38,8 +38,15 @@ STATISTIC(NumDeleted, "Number of functions deleted because all callers found");
 STATISTIC(NumMergedAllocas, "Number of allocas merged together");
 
 static cl::opt<int>
-InlineLimit("inline-threshold", cl::Hidden, cl::init(200), cl::ZeroOrMore,
-        cl::desc("Control the amount of inlining to perform (default = 200)"));
+InlineLimit("inline-threshold", cl::Hidden, cl::init(225), cl::ZeroOrMore,
+        cl::desc("Control the amount of inlining to perform (default = 225)"));
+
+static cl::opt<int>
+HintThreshold("inlinehint-threshold", cl::Hidden, cl::init(325),
+              cl::desc("Threshold for inlining functions with inline hint"));
+
+// Threshold to use when optsize is specified (and there is no -inline-limit).
+const int OptSizeThreshold = 75;
 
 Inliner::Inliner(void *ID) 
   : CallGraphSCCPass(ID), InlineThreshold(InlineLimit) {}
@@ -172,13 +179,23 @@ static bool InlineCallIfPossible(CallSite CS, CallGraph &CG,
   return true;
 }
 
-unsigned Inliner::getInlineThreshold(Function* Caller) const {
+unsigned Inliner::getInlineThreshold(CallSite CS) const {
+  int thres = InlineThreshold;
+
+  // Listen to optsize when -inline-limit is not given.
+  Function *Caller = CS.getCaller();
   if (Caller && !Caller->isDeclaration() &&
       Caller->hasFnAttr(Attribute::OptimizeForSize) &&
       InlineLimit.getNumOccurrences() == 0)
-    return 50;
-  else
-    return InlineThreshold;
+    thres = OptSizeThreshold;
+
+  // Listen to inlinehint when it would increase the threshold.
+  Function *Callee = CS.getCalledFunction();
+  if (HintThreshold > thres && Callee && !Callee->isDeclaration() &&
+      Callee->hasFnAttr(Attribute::InlineHint))
+    thres = HintThreshold;
+
+  return thres;
 }
 
 /// shouldInline - Return true if the inliner should attempt to inline
@@ -200,7 +217,7 @@ bool Inliner::shouldInline(CallSite CS) {
   
   int Cost = IC.getValue();
   Function *Caller = CS.getCaller();
-  int CurrentThreshold = getInlineThreshold(Caller);
+  int CurrentThreshold = getInlineThreshold(CS);
   float FudgeFactor = getInlineFudgeFactor(CS);
   if (Cost >= (int)(CurrentThreshold * FudgeFactor)) {
     DEBUG(dbgs() << "    NOT Inlining: cost=" << Cost
@@ -236,8 +253,7 @@ bool Inliner::shouldInline(CallSite CS) {
 
       outerCallsFound = true;
       int Cost2 = IC2.getValue();
-      Function *Caller2 = CS2.getCaller();
-      int CurrentThreshold2 = getInlineThreshold(Caller2);
+      int CurrentThreshold2 = getInlineThreshold(CS2);
       float FudgeFactor2 = getInlineFudgeFactor(CS2);
 
       if (Cost2 >= (int)(CurrentThreshold2 * FudgeFactor2))
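The threshold selection introduced in this Inliner change (base 225, 75 under optsize unless the user set the flag, raised to 325 for inlinehint callees) reduces to a small pure function. A standalone sketch with flag plumbing replaced by parameters; all names here are illustrative:

```cpp
// Model of Inliner::getInlineThreshold after this change.
int inlineThreshold(int BaseThreshold,        // -inline-threshold (default 225)
                    bool ThresholdGivenOnCmdLine,
                    bool CallerOptSize,       // caller has optsize attribute
                    bool CalleeInlineHint,    // callee has inlinehint attribute
                    int OptSizeThreshold = 75,
                    int HintThreshold = 325) {
  int Thres = BaseThreshold;
  // optsize only wins when the user did not set the threshold explicitly.
  if (CallerOptSize && !ThresholdGivenOnCmdLine)
    Thres = OptSizeThreshold;
  // inlinehint only raises the threshold, never lowers it.
  if (CalleeInlineHint && HintThreshold > Thres)
    Thres = HintThreshold;
  return Thres;
}
```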
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/MergeFunctions.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/MergeFunctions.cpp
index fa8845b..b07e22c 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/MergeFunctions.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/MergeFunctions.cpp
@@ -467,7 +467,6 @@ static LinkageCategory categorize(const Function *F) {
   case GlobalValue::AppendingLinkage:
   case GlobalValue::DLLImportLinkage:
   case GlobalValue::DLLExportLinkage:
-  case GlobalValue::GhostLinkage:
   case GlobalValue::CommonLinkage:
     return ExternalStrong;
   }
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/PartialInlining.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/PartialInlining.cpp
index f40902f..f8ec722 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/PartialInlining.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/PartialInlining.cpp
@@ -117,7 +117,7 @@ Function* PartialInliner::unswitchFunction(Function* F) {
   DominatorTree DT;
   DT.runOnFunction(*duplicateFunction);
   
-  // Extract the body of the the if.
+  // Extract the body of the if.
   Function* extractedFunction = ExtractCodeRegion(DT, toExtract);
   
   // Inline the top-level if test into all callers.
diff --git a/libclamav/c++/llvm/lib/Transforms/IPO/StripSymbols.cpp b/libclamav/c++/llvm/lib/Transforms/IPO/StripSymbols.cpp
index 0e0d83a..310e4a2 100644
--- a/libclamav/c++/llvm/lib/Transforms/IPO/StripSymbols.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/IPO/StripSymbols.cpp
@@ -214,6 +214,15 @@ static bool StripDebugInfo(Module &M) {
     Changed = true;
   }
 
+  if (Function *DbgVal = M.getFunction("llvm.dbg.value")) {
+    while (!DbgVal->use_empty()) {
+      CallInst *CI = cast<CallInst>(DbgVal->use_back());
+      CI->eraseFromParent();
+    }
+    DbgVal->eraseFromParent();
+    Changed = true;
+  }
+
   NamedMDNode *NMD = M.getNamedMetadata("llvm.dbg.gv");
   if (NMD) {
     Changed = true;
diff --git a/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombine.h b/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombine.h
index 5367900..09accb6 100644
--- a/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombine.h
+++ b/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombine.h
@@ -199,11 +199,12 @@ private:
                                   SmallVectorImpl<Value*> &NewIndices);
   Instruction *FoldOpIntoSelect(Instruction &Op, SelectInst *SI);
                                  
-  /// ValueRequiresCast - Return true if the cast from "V to Ty" actually
-  /// results in any code being generated.  It does not require codegen if V is
-  /// simple enough or if the cast can be folded into other casts.
-  bool ValueRequiresCast(Instruction::CastOps opcode,const Value *V,
-                         const Type *Ty);
+  /// ShouldOptimizeCast - Return true if the cast from "V to Ty" actually
+  /// results in any code being generated and is interesting to optimize out. If
+  /// the cast can be eliminated by some other simple transformation, we prefer
+  /// to do the simplification first.
+  bool ShouldOptimizeCast(Instruction::CastOps opcode,const Value *V,
+                          const Type *Ty);
 
   Instruction *visitCallSite(CallSite CS);
   bool transformConstExprCastCall(CallSite CS);
diff --git a/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineAddSub.cpp b/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineAddSub.cpp
index 4891ff0..c2924ab 100644
--- a/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineAddSub.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineAddSub.cpp
@@ -121,42 +121,26 @@ Instruction *InstCombiner::visitAdd(BinaryOperator &I) {
         match(LHS, m_Xor(m_Value(XorLHS), m_ConstantInt(XorRHS)))) {
       uint32_t TySizeBits = I.getType()->getScalarSizeInBits();
       const APInt& RHSVal = cast<ConstantInt>(RHSC)->getValue();
-      
-      uint32_t Size = TySizeBits / 2;
-      APInt C0080Val(APInt(TySizeBits, 1ULL).shl(Size - 1));
-      APInt CFF80Val(-C0080Val);
-      do {
-        if (TySizeBits > Size) {
-          // If we have ADD(XOR(AND(X, 0xFF), 0x80), 0xF..F80), it's a sext.
-          // If we have ADD(XOR(AND(X, 0xFF), 0xF..F80), 0x80), it's a sext.
-          if ((RHSVal == CFF80Val && XorRHS->getValue() == C0080Val) ||
-              (RHSVal == C0080Val && XorRHS->getValue() == CFF80Val)) {
-            // This is a sign extend if the top bits are known zero.
-            if (!MaskedValueIsZero(XorLHS, 
-                   APInt::getHighBitsSet(TySizeBits, TySizeBits - Size)))
-              Size = 0;  // Not a sign ext, but can't be any others either.
-            break;
-          }
-        }
-        Size >>= 1;
-        C0080Val = APIntOps::lshr(C0080Val, Size);
-        CFF80Val = APIntOps::ashr(CFF80Val, Size);
-      } while (Size >= 1);
-      
-      // FIXME: This shouldn't be necessary. When the backends can handle types
-      // with funny bit widths then this switch statement should be removed. It
-      // is just here to get the size of the "middle" type back up to something
-      // that the back ends can handle.
-      const Type *MiddleType = 0;
-      switch (Size) {
-        default: break;
-        case 32:
-        case 16:
-        case  8: MiddleType = IntegerType::get(I.getContext(), Size); break;
+      unsigned ExtendAmt = 0;
+      // If we have ADD(XOR(AND(X, 0xFF), 0x80), 0xF..F80), it's a sext.
+      // If we have ADD(XOR(AND(X, 0xFF), 0xF..F80), 0x80), it's a sext.
+      if (XorRHS->getValue() == -RHSVal) {
+        if (RHSVal.isPowerOf2())
+          ExtendAmt = TySizeBits - RHSVal.logBase2() - 1;
+        else if (XorRHS->getValue().isPowerOf2())
+          ExtendAmt = TySizeBits - XorRHS->getValue().logBase2() - 1;
+      }
+
+      if (ExtendAmt) {
+        APInt Mask = APInt::getHighBitsSet(TySizeBits, ExtendAmt);
+        if (!MaskedValueIsZero(XorLHS, Mask))
+          ExtendAmt = 0;
       }
-      if (MiddleType) {
-        Value *NewTrunc = Builder->CreateTrunc(XorLHS, MiddleType, "sext");
-        return new SExtInst(NewTrunc, I.getType(), I.getName());
+
+      if (ExtendAmt) {
+        Constant *ShAmt = ConstantInt::get(I.getType(), ExtendAmt);
+        Value *NewShl = Builder->CreateShl(XorLHS, ShAmt, "sext");
+        return BinaryOperator::CreateAShr(NewShl, ShAmt);
       }
     }
   }
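The identity behind this fold can be checked directly with plain integers: masking to the low bits, flipping the sign bit with xor, and adding the matching negative constant is sign extension, and so is the shl/ashr pair the rewritten code emits (ExtendAmt = 32 - log2(0x80) - 1 = 24 for the byte case). A minimal check, no LLVM types involved:

```cpp
#include <cstdint>

// ADD(XOR(AND(x, 0xFF), 0x80), 0xFFFFFF80) sign-extends the low byte of x.
uint32_t sextViaXorAdd(uint32_t X) {
  return ((X & 0xFFu) ^ 0x80u) + 0xFFFFFF80u;
}

// What the new InstCombine code emits instead: shift left, then
// arithmetic shift right by the same amount (24 = ExtendAmt here).
uint32_t sextViaShifts(uint32_t X) {
  return (uint32_t)(((int32_t)(X << 24)) >> 24);
}

// Reference: the obvious cast-based sign extension of the low byte.
uint32_t sextReference(uint32_t X) {
  return (uint32_t)(int32_t)(int8_t)(uint8_t)X;
}
```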
@@ -676,6 +660,13 @@ Instruction *InstCombiner::visitSub(BinaryOperator &I) {
               return BinaryOperator::CreateSDiv(Op1I->getOperand(0),
                                           ConstantExpr::getNeg(DivRHS));
 
+      // 0 - (C << X)  -> (-C << X)
+      if (Op1I->getOpcode() == Instruction::Shl)
+        if (ConstantInt *CSI = dyn_cast<ConstantInt>(Op0))
+          if (CSI->isZero())
+            if (Value *ShlLHSNeg = dyn_castNegVal(Op1I->getOperand(0)))
+              return BinaryOperator::CreateShl(ShlLHSNeg, Op1I->getOperand(1));
+
       // X - X*C --> X * (1-C)
       ConstantInt *C2 = 0;
       if (dyn_castFoldableMul(Op1I, C2) == Op0) {
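The new `0 - (C << X) -> (-C) << X` fold rests on shifts distributing over two's-complement negation: both sides are the low 32 bits of -C * 2^X. Unsigned arithmetic in C++ wraps mod 2^32, which models LLVM's i32 semantics, so the identity can be spot-checked directly (hypothetical helper names):

```cpp
#include <cstdint>

// Left-hand side of the fold: negate after shifting.
uint32_t negOfShl(uint32_t C, unsigned X) { return 0u - (C << X); }

// Right-hand side: shift the negated constant.
uint32_t shlOfNeg(uint32_t C, unsigned X) { return (0u - C) << X; }
```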
diff --git a/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp b/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp
index 806e7b5..515753f 100644
--- a/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp
@@ -546,7 +546,7 @@ Instruction *InstCombiner::FoldAndOfICmps(Instruction &I,
     std::swap(LHSCC, RHSCC);
   }
 
-  // At this point, we know we have have two icmp instructions
+  // At this point, we know we have two icmp instructions
   // comparing a value against two constants and and'ing the result
   // together.  Because of the above check, we know that we only have
   // icmp eq, icmp ne, icmp [su]lt, and icmp [SU]gt here. We also know 
@@ -932,24 +932,49 @@ Instruction *InstCombiner::visitAnd(BinaryOperator &I) {
     if (ICmpInst *LHS = dyn_cast<ICmpInst>(Op0))
       if (Instruction *Res = FoldAndOfICmps(I, LHS, RHS))
         return Res;
-
+  
+  // If and'ing two fcmp, try combine them into one.
+  if (FCmpInst *LHS = dyn_cast<FCmpInst>(I.getOperand(0)))
+    if (FCmpInst *RHS = dyn_cast<FCmpInst>(I.getOperand(1)))
+      if (Instruction *Res = FoldAndOfFCmps(I, LHS, RHS))
+        return Res;
+  
+  
   // fold (and (cast A), (cast B)) -> (cast (and A, B))
   if (CastInst *Op0C = dyn_cast<CastInst>(Op0))
-    if (CastInst *Op1C = dyn_cast<CastInst>(Op1))
-      if (Op0C->getOpcode() == Op1C->getOpcode()) { // same cast kind ?
-        const Type *SrcTy = Op0C->getOperand(0)->getType();
-        if (SrcTy == Op1C->getOperand(0)->getType() &&
-            SrcTy->isIntOrIntVector() &&
-            // Only do this if the casts both really cause code to be generated.
-            ValueRequiresCast(Op0C->getOpcode(), Op0C->getOperand(0),
-                              I.getType()) &&
-            ValueRequiresCast(Op1C->getOpcode(), Op1C->getOperand(0), 
-                              I.getType())) {
-          Value *NewOp = Builder->CreateAnd(Op0C->getOperand(0),
-                                            Op1C->getOperand(0), I.getName());
+    if (CastInst *Op1C = dyn_cast<CastInst>(Op1)) {
+      const Type *SrcTy = Op0C->getOperand(0)->getType();
+      if (Op0C->getOpcode() == Op1C->getOpcode() && // same cast kind ?
+          SrcTy == Op1C->getOperand(0)->getType() &&
+          SrcTy->isIntOrIntVector()) {
+        Value *Op0COp = Op0C->getOperand(0), *Op1COp = Op1C->getOperand(0);
+        
+        // Only do this if the casts both really cause code to be generated.
+        if (ShouldOptimizeCast(Op0C->getOpcode(), Op0COp, I.getType()) &&
+            ShouldOptimizeCast(Op1C->getOpcode(), Op1COp, I.getType())) {
+          Value *NewOp = Builder->CreateAnd(Op0COp, Op1COp, I.getName());
           return CastInst::Create(Op0C->getOpcode(), NewOp, I.getType());
         }
+        
+        // If this is and(cast(icmp), cast(icmp)), try to fold this even if the
+        // cast is otherwise not optimizable.  This happens for vector sexts.
+        if (ICmpInst *RHS = dyn_cast<ICmpInst>(Op1COp))
+          if (ICmpInst *LHS = dyn_cast<ICmpInst>(Op0COp))
+            if (Instruction *Res = FoldAndOfICmps(I, LHS, RHS)) {
+              InsertNewInstBefore(Res, I);
+              return CastInst::Create(Op0C->getOpcode(), Res, I.getType());
+            }
+        
+        // If this is and(cast(fcmp), cast(fcmp)), try to fold this even if the
+        // cast is otherwise not optimizable.  This happens for vector sexts.
+        if (FCmpInst *RHS = dyn_cast<FCmpInst>(Op1COp))
+          if (FCmpInst *LHS = dyn_cast<FCmpInst>(Op0COp))
+            if (Instruction *Res = FoldAndOfFCmps(I, LHS, RHS)) {
+              InsertNewInstBefore(Res, I);
+              return CastInst::Create(Op0C->getOpcode(), Res, I.getType());
+            }
       }
+    }
     
   // (X >> Z) & (Y >> Z)  -> (X&Y) >> Z  for all shifts.
   if (BinaryOperator *SI1 = dyn_cast<BinaryOperator>(Op1)) {
@@ -965,13 +990,6 @@ Instruction *InstCombiner::visitAnd(BinaryOperator &I) {
       }
   }
 
-  // If and'ing two fcmp, try combine them into one.
-  if (FCmpInst *LHS = dyn_cast<FCmpInst>(I.getOperand(0))) {
-    if (FCmpInst *RHS = dyn_cast<FCmpInst>(I.getOperand(1)))
-      if (Instruction *Res = FoldAndOfFCmps(I, LHS, RHS))
-        return Res;
-  }
-
   return Changed ? &I : 0;
 }
 
@@ -1142,14 +1160,20 @@ static Instruction *MatchSelectFromAndOr(Value *A, Value *B,
                                          Value *C, Value *D) {
   // If A is not a select of -1/0, this cannot match.
   Value *Cond = 0;
-  if (!match(A, m_SExt(m_Value(Cond))))
+  if (!match(A, m_SExt(m_Value(Cond))) ||
+      !Cond->getType()->isInteger(1))
     return 0;
 
   // ((cond?-1:0)&C) | (B&(cond?0:-1)) -> cond ? C : B.
   if (match(D, m_Not(m_SExt(m_Specific(Cond)))))
     return SelectInst::Create(Cond, C, B);
+  if (match(D, m_SExt(m_Not(m_Specific(Cond)))))
+    return SelectInst::Create(Cond, C, B);
+  
   // ((cond?-1:0)&C) | ((cond?0:-1)&D) -> cond ? C : D.
-  if (match(B, m_SelectCst<0, -1>(m_Specific(Cond))))
+  if (match(B, m_Not(m_SExt(m_Specific(Cond)))))
+    return SelectInst::Create(Cond, C, D);
+  if (match(B, m_SExt(m_Not(m_Specific(Cond)))))
     return SelectInst::Create(Cond, C, D);
   return 0;
 }
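The idiom this matcher recognizes is the classic branchless select: sign-extend an i1 condition to an all-ones or all-zeros mask, then combine with and/or. The identity, checked with an explicit 32-bit mask (illustrative sketch, not LLVM API):

```cpp
#include <cstdint>

// sext(i1 cond) yields -1 (all ones) or 0; then
// (mask & C) | (~mask & B) selects C when cond is true, else B,
// which is exactly the select MatchSelectFromAndOr produces.
uint32_t maskSelect(bool Cond, uint32_t C, uint32_t B) {
  uint32_t Mask = Cond ? 0xFFFFFFFFu : 0u;  // sext of the condition
  return (Mask & C) | (~Mask & B);
}
```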
@@ -1220,7 +1244,7 @@ Instruction *InstCombiner::FoldOrOfICmps(Instruction &I,
     std::swap(LHSCC, RHSCC);
   }
   
-  // At this point, we know we have have two icmp instructions
+  // At this point, we know we have two icmp instructions
   // comparing a value against two constants and or'ing the result
   // together.  Because of the above check, we know that we only have
   // ICMP_EQ, ICMP_NE, ICMP_LT, and ICMP_GT here. We also know (from the
@@ -1591,15 +1615,19 @@ Instruction *InstCombiner::visitOr(BinaryOperator &I) {
       }
     }
 
-    // (A & (C0?-1:0)) | (B & ~(C0?-1:0)) ->  C0 ? A : B, and commuted variants
-    if (Instruction *Match = MatchSelectFromAndOr(A, B, C, D))
-      return Match;
-    if (Instruction *Match = MatchSelectFromAndOr(B, A, D, C))
-      return Match;
-    if (Instruction *Match = MatchSelectFromAndOr(C, B, A, D))
-      return Match;
-    if (Instruction *Match = MatchSelectFromAndOr(D, A, B, C))
-      return Match;
+    // (A & (C0?-1:0)) | (B & ~(C0?-1:0)) ->  C0 ? A : B, and commuted variants.
+    // Don't do this for vector select idioms, the code generator doesn't handle
+    // them well yet.
+    if (!isa<VectorType>(I.getType())) {
+      if (Instruction *Match = MatchSelectFromAndOr(A, B, C, D))
+        return Match;
+      if (Instruction *Match = MatchSelectFromAndOr(B, A, D, C))
+        return Match;
+      if (Instruction *Match = MatchSelectFromAndOr(C, B, A, D))
+        return Match;
+      if (Instruction *Match = MatchSelectFromAndOr(D, A, B, C))
+        return Match;
+    }
 
     // ((A&~B)|(~A&B)) -> A^B
     if ((match(C, m_Not(m_Specific(D))) &&
@@ -1659,37 +1687,51 @@ Instruction *InstCombiner::visitOr(BinaryOperator &I) {
       if (Instruction *Res = FoldOrOfICmps(I, LHS, RHS))
         return Res;
     
+  // (fcmp uno x, c) | (fcmp uno y, c)  -> (fcmp uno x, y)
+  if (FCmpInst *LHS = dyn_cast<FCmpInst>(I.getOperand(0)))
+    if (FCmpInst *RHS = dyn_cast<FCmpInst>(I.getOperand(1)))
+      if (Instruction *Res = FoldOrOfFCmps(I, LHS, RHS))
+        return Res;
+  
   // fold (or (cast A), (cast B)) -> (cast (or A, B))
   if (CastInst *Op0C = dyn_cast<CastInst>(Op0)) {
     if (CastInst *Op1C = dyn_cast<CastInst>(Op1))
       if (Op0C->getOpcode() == Op1C->getOpcode()) {// same cast kind ?
-        if (!isa<ICmpInst>(Op0C->getOperand(0)) ||
-            !isa<ICmpInst>(Op1C->getOperand(0))) {
-          const Type *SrcTy = Op0C->getOperand(0)->getType();
-          if (SrcTy == Op1C->getOperand(0)->getType() &&
-              SrcTy->isIntOrIntVector() &&
+        const Type *SrcTy = Op0C->getOperand(0)->getType();
+        if (SrcTy == Op1C->getOperand(0)->getType() &&
+            SrcTy->isIntOrIntVector()) {
+          Value *Op0COp = Op0C->getOperand(0), *Op1COp = Op1C->getOperand(0);
+
+          if ((!isa<ICmpInst>(Op0COp) || !isa<ICmpInst>(Op1COp)) &&
               // Only do this if the casts both really cause code to be
               // generated.
-              ValueRequiresCast(Op0C->getOpcode(), Op0C->getOperand(0), 
-                                I.getType()) &&
-              ValueRequiresCast(Op1C->getOpcode(), Op1C->getOperand(0), 
-                                I.getType())) {
-            Value *NewOp = Builder->CreateOr(Op0C->getOperand(0),
-                                             Op1C->getOperand(0), I.getName());
+              ShouldOptimizeCast(Op0C->getOpcode(), Op0COp, I.getType()) &&
+              ShouldOptimizeCast(Op1C->getOpcode(), Op1COp, I.getType())) {
+            Value *NewOp = Builder->CreateOr(Op0COp, Op1COp, I.getName());
             return CastInst::Create(Op0C->getOpcode(), NewOp, I.getType());
           }
+          
+          // If this is or(cast(icmp), cast(icmp)), try to fold this even if the
+          // cast is otherwise not optimizable.  This happens for vector sexts.
+          if (ICmpInst *RHS = dyn_cast<ICmpInst>(Op1COp))
+            if (ICmpInst *LHS = dyn_cast<ICmpInst>(Op0COp))
+              if (Instruction *Res = FoldOrOfICmps(I, LHS, RHS)) {
+                InsertNewInstBefore(Res, I);
+                return CastInst::Create(Op0C->getOpcode(), Res, I.getType());
+              }
+          
+          // If this is or(cast(fcmp), cast(fcmp)), try to fold this even if the
+          // cast is otherwise not optimizable.  This happens for vector sexts.
+          if (FCmpInst *RHS = dyn_cast<FCmpInst>(Op1COp))
+            if (FCmpInst *LHS = dyn_cast<FCmpInst>(Op0COp))
+              if (Instruction *Res = FoldOrOfFCmps(I, LHS, RHS)) {
+                InsertNewInstBefore(Res, I);
+                return CastInst::Create(Op0C->getOpcode(), Res, I.getType());
+              }
         }
       }
   }
   
-    
-  // (fcmp uno x, c) | (fcmp uno y, c)  -> (fcmp uno x, y)
-  if (FCmpInst *LHS = dyn_cast<FCmpInst>(I.getOperand(0))) {
-    if (FCmpInst *RHS = dyn_cast<FCmpInst>(I.getOperand(1)))
-      if (Instruction *Res = FoldOrOfFCmps(I, LHS, RHS))
-        return Res;
-  }
-
   return Changed ? &I : 0;
 }
 
@@ -1976,10 +2018,10 @@ Instruction *InstCombiner::visitXor(BinaryOperator &I) {
         const Type *SrcTy = Op0C->getOperand(0)->getType();
         if (SrcTy == Op1C->getOperand(0)->getType() && SrcTy->isInteger() &&
             // Only do this if the casts both really cause code to be generated.
-            ValueRequiresCast(Op0C->getOpcode(), Op0C->getOperand(0), 
-                              I.getType()) &&
-            ValueRequiresCast(Op1C->getOpcode(), Op1C->getOperand(0), 
-                              I.getType())) {
+            ShouldOptimizeCast(Op0C->getOpcode(), Op0C->getOperand(0), 
+                               I.getType()) &&
+            ShouldOptimizeCast(Op1C->getOpcode(), Op1C->getOperand(0), 
+                               I.getType())) {
           Value *NewOp = Builder->CreateXor(Op0C->getOperand(0),
                                             Op1C->getOperand(0), I.getName());
           return CastInst::Create(Op0C->getOpcode(), NewOp, I.getType());
diff --git a/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineCalls.cpp b/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineCalls.cpp
index 47c37c4..e501ddc 100644
--- a/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineCalls.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineCalls.cpp
@@ -230,7 +230,6 @@ Instruction *InstCombiner::SimplifyMemSet(MemSetInst *MI) {
   return 0;
 }
 
-
 /// visitCallInst - CallInst simplification.  This mostly only handles folding 
 /// of intrinsic instructions.  For normal calls, it allows visitCallSite to do
 /// the heavy lifting.
@@ -304,6 +303,60 @@ Instruction *InstCombiner::visitCallInst(CallInst &CI) {
   
   switch (II->getIntrinsicID()) {
   default: break;
+  case Intrinsic::objectsize: {
+    const Type *ReturnTy = CI.getType();
+    Value *Op1 = II->getOperand(1);
+    bool Min = (cast<ConstantInt>(II->getOperand(2))->getZExtValue() == 1);
+    
+    // We need target data for just about everything so depend on it.
+    if (!TD) break;
+    
+    // Get to the real allocated thing and offset as fast as possible.
+    Op1 = Op1->stripPointerCasts();
+    
+    // If we've stripped down to a single global variable that we
+    // can know the size of then just return that.
+    if (GlobalVariable *GV = dyn_cast<GlobalVariable>(Op1)) {
+      if (GV->hasDefinitiveInitializer()) {
+        Constant *C = GV->getInitializer();
+        size_t globalSize = TD->getTypeAllocSize(C->getType());
+        return ReplaceInstUsesWith(CI, ConstantInt::get(ReturnTy, globalSize));
+      } else {
+        Constant *RetVal = ConstantInt::get(ReturnTy, Min ? 0 : -1ULL);
+        return ReplaceInstUsesWith(CI, RetVal);
+      }
+    } else if (ConstantExpr *CE = dyn_cast<ConstantExpr>(Op1)) {
+      
+      // Only handle constant GEPs here.
+      if (CE->getOpcode() != Instruction::GetElementPtr) break;
+      GEPOperator *GEP = cast<GEPOperator>(CE);
+      
+      // Make sure we're not a constant offset from an external
+      // global.
+      Value *Operand = GEP->getPointerOperand();
+      Operand = Operand->stripPointerCasts();
+      if (GlobalVariable *GV = dyn_cast<GlobalVariable>(Operand))
+        if (!GV->hasDefinitiveInitializer()) break;
+      
+      // Get what we're pointing to and its size. 
+      const PointerType *BaseType = 
+        cast<PointerType>(Operand->getType());
+      size_t Size = TD->getTypeAllocSize(BaseType->getElementType());
+      
+      // Get the current byte offset into the thing. Use the original
+      // operand in case we're looking through a bitcast.
+      SmallVector<Value*, 8> Ops(CE->op_begin()+1, CE->op_end());
+      const PointerType *OffsetType =
+        cast<PointerType>(GEP->getPointerOperand()->getType());
+      size_t Offset = TD->getIndexedOffset(OffsetType, &Ops[0], Ops.size());
+
+      assert(Size >= Offset);
+      
+      Constant *RetVal = ConstantInt::get(ReturnTy, Size-Offset);
+      return ReplaceInstUsesWith(CI, RetVal);
+      
+    }
+  }
   case Intrinsic::bswap:
     // bswap(bswap(x)) -> x
     if (IntrinsicInst *Operand = dyn_cast<IntrinsicInst>(II->getOperand(1)))
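The new objectsize lowering above boils down to simple arithmetic: for a known global reached through a constant GEP, report allocation size minus byte offset (the bytes remaining in the object); for anything unknown, report 0 when the intrinsic asks for a minimum or all-ones ("don't know") for a maximum. A hedged standalone model with invented names:

```cpp
#include <cstdint>

// Sketch of the objectsize result computed by this change.
// Known object: bytes remaining past the constant offset
// (mirrors "assert(Size >= Offset); return Size - Offset").
// Unknown object: 0 for the minimum query, ~0 for the maximum.
uint64_t objectSize(bool KnownSize, uint64_t AllocSize, uint64_t Offset,
                    bool Min) {
  if (!KnownSize)
    return Min ? 0 : ~0ULL;
  return AllocSize - Offset;
}
```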
@@ -632,18 +685,6 @@ Instruction *InstCombiner::visitCallInst(CallInst &CI) {
       return EraseInstFromFunction(CI);
     break;
   }
-  case Intrinsic::objectsize: {
-    ConstantInt *Const = cast<ConstantInt>(II->getOperand(2));
-    const Type *Ty = CI.getType();
-
-    // 0 is maximum number of bytes left, 1 is minimum number of bytes left.
-    // TODO: actually add these values, the current return values are "don't
-    // know".
-    if (Const->getZExtValue() == 0)
-      return ReplaceInstUsesWith(CI, Constant::getAllOnesValue(Ty));
-    else
-      return ReplaceInstUsesWith(CI, ConstantInt::get(Ty, 0));
-  }
   }
 
   return visitCallSite(II);
@@ -692,10 +733,14 @@ Instruction *InstCombiner::visitCallSite(CallSite CS) {
   Value *Callee = CS.getCalledValue();
 
   if (Function *CalleeF = dyn_cast<Function>(Callee))
-    if (CalleeF->getCallingConv() != CS.getCallingConv()) {
+    // If the call and callee calling conventions don't match, this call must
+    // be unreachable, as the call is undefined.
+    if (CalleeF->getCallingConv() != CS.getCallingConv() &&
+        // Only do this for calls to a function with a body.  A prototype may
+        // not actually end up matching the implementation's calling conv for a
+        // variety of reasons (e.g. it may be written in assembly).
+        !CalleeF->isDeclaration()) {
       Instruction *OldCall = CS.getInstruction();
-      // If the call and callee calling conventions don't match, this call must
-      // be unreachable, as the call is undefined.
       new StoreInst(ConstantInt::getTrue(Callee->getContext()),
                 UndefValue::get(Type::getInt1PtrTy(Callee->getContext())), 
                                   OldCall);
@@ -703,8 +748,13 @@ Instruction *InstCombiner::visitCallSite(CallSite CS) {
       // This allows ValueHandlers and custom metadata to adjust itself.
       if (!OldCall->getType()->isVoidTy())
         OldCall->replaceAllUsesWith(UndefValue::get(OldCall->getType()));
-      if (isa<CallInst>(OldCall))   // Not worth removing an invoke here.
+      if (isa<CallInst>(OldCall))
         return EraseInstFromFunction(*OldCall);
+      
+      // We cannot remove an invoke, because it would change the CFG, just
+      // change the callee to a null pointer.
+      cast<InvokeInst>(OldCall)->setOperand(0,
+                                    Constant::getNullValue(CalleeF->getType()));
       return 0;
     }
 
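The hunk above also removes the `Intrinsic::objectsize` case, whose TODO admitted the returned values were just "don't know" placeholders: all-ones for mode 0 (maximum bytes remaining, i.e. unbounded) and zero for mode 1 (minimum bytes remaining). A minimal sketch of that semantics, in plain C++ rather than the LLVM API (`objectsize_unknown` is a hypothetical name):

```cpp
#include <cstdint>

// Conceptual sketch, not LLVM code: the "don't know" results the removed
// objectsize handling produced. Mode 0 asks for the maximum number of
// bytes left (unknown -> all ones, i.e. "unbounded"); mode 1 asks for the
// minimum number of bytes left (unknown -> 0).
uint64_t objectsize_unknown(bool min_mode) {
  return min_mode ? 0 : ~uint64_t(0);
}
```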
diff --git a/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineCasts.cpp b/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineCasts.cpp
index 377651e..68e17e5 100644
--- a/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineCasts.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineCasts.cpp
@@ -255,17 +255,26 @@ isEliminableCastPair(
   return Instruction::CastOps(Res);
 }
 
-/// ValueRequiresCast - Return true if the cast from "V to Ty" actually results
-/// in any code being generated.  It does not require codegen if V is simple
-/// enough or if the cast can be folded into other casts.
-bool InstCombiner::ValueRequiresCast(Instruction::CastOps opcode,const Value *V,
-                                     const Type *Ty) {
+/// ShouldOptimizeCast - Return true if the cast from "V to Ty" actually
+/// results in any code being generated and is interesting to optimize out. If
+/// the cast can be eliminated by some other simple transformation, we prefer
+/// to do the simplification first.
+bool InstCombiner::ShouldOptimizeCast(Instruction::CastOps opc, const Value *V,
+                                      const Type *Ty) {
+  // Noop casts and casts of constants should be eliminated trivially.
   if (V->getType() == Ty || isa<Constant>(V)) return false;
   
-  // If this is another cast that can be eliminated, it isn't codegen either.
+  // If this is another cast that can be eliminated, we prefer to have it
+  // eliminated.
   if (const CastInst *CI = dyn_cast<CastInst>(V))
-    if (isEliminableCastPair(CI, opcode, Ty, TD))
+    if (isEliminableCastPair(CI, opc, Ty, TD))
       return false;
+  
+  // If this is a vector sext from a compare, then we don't want to break the
+  // idiom where each element of the extended vector is either zero or all ones.
+  if (opc == Instruction::SExt && isa<CmpInst>(V) && isa<VectorType>(Ty))
+    return false;
+  
   return true;
 }
 
@@ -1145,16 +1154,22 @@ Instruction *InstCombiner::visitSIToFP(CastInst &CI) {
 }
 
 Instruction *InstCombiner::visitIntToPtr(IntToPtrInst &CI) {
-  // If the source integer type is larger than the intptr_t type for
-  // this target, do a trunc to the intptr_t type, then inttoptr of it.  This
-  // allows the trunc to be exposed to other transforms.  Don't do this for
-  // extending inttoptr's, because we don't know if the target sign or zero
-  // extends to pointers.
-  if (TD && CI.getOperand(0)->getType()->getScalarSizeInBits() >
-      TD->getPointerSizeInBits()) {
-    Value *P = Builder->CreateTrunc(CI.getOperand(0),
-                                    TD->getIntPtrType(CI.getContext()), "tmp");
-    return new IntToPtrInst(P, CI.getType());
+  // If the source integer type is not the intptr_t type for this target, do a
+  // trunc or zext to the intptr_t type, then inttoptr of it.  This allows the
+  // cast to be exposed to other transforms.
+  if (TD) {
+    if (CI.getOperand(0)->getType()->getScalarSizeInBits() >
+        TD->getPointerSizeInBits()) {
+      Value *P = Builder->CreateTrunc(CI.getOperand(0),
+                                      TD->getIntPtrType(CI.getContext()), "tmp");
+      return new IntToPtrInst(P, CI.getType());
+    }
+    if (CI.getOperand(0)->getType()->getScalarSizeInBits() <
+        TD->getPointerSizeInBits()) {
+      Value *P = Builder->CreateZExt(CI.getOperand(0),
+                                     TD->getIntPtrType(CI.getContext()), "tmp");
+      return new IntToPtrInst(P, CI.getType());
+    }
   }
   
   if (Instruction *I = commonCastTransforms(CI))
@@ -1216,17 +1231,22 @@ Instruction *InstCombiner::commonPointerCastTransforms(CastInst &CI) {
 }
 
 Instruction *InstCombiner::visitPtrToInt(PtrToIntInst &CI) {
-  // If the destination integer type is smaller than the intptr_t type for
-  // this target, do a ptrtoint to intptr_t then do a trunc.  This allows the
-  // trunc to be exposed to other transforms.  Don't do this for extending
-  // ptrtoint's, because we don't know if the target sign or zero extends its
-  // pointers.
-  if (TD &&
-      CI.getType()->getScalarSizeInBits() < TD->getPointerSizeInBits()) {
-    Value *P = Builder->CreatePtrToInt(CI.getOperand(0),
-                                       TD->getIntPtrType(CI.getContext()),
-                                       "tmp");
-    return new TruncInst(P, CI.getType());
+  // If the destination integer type is not the intptr_t type for this target,
+  // do a ptrtoint to intptr_t then do a trunc or zext.  This allows the cast
+  // to be exposed to other transforms.
+  if (TD) {
+    if (CI.getType()->getScalarSizeInBits() < TD->getPointerSizeInBits()) {
+      Value *P = Builder->CreatePtrToInt(CI.getOperand(0),
+                                         TD->getIntPtrType(CI.getContext()),
+                                         "tmp");
+      return new TruncInst(P, CI.getType());
+    }
+    if (CI.getType()->getScalarSizeInBits() > TD->getPointerSizeInBits()) {
+      Value *P = Builder->CreatePtrToInt(CI.getOperand(0),
+                                         TD->getIntPtrType(CI.getContext()),
+                                         "tmp");
+      return new ZExtInst(P, CI.getType());
+    }
   }
   
   return commonPointerCastTransforms(CI);
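Both cast hunks above generalize the same idea: instead of only shrinking oversized integers toward `intptr_t`, the source is now normalized to exactly pointer width in either direction (trunc when wider, zext when narrower), so the width-changing cast is exposed to other transforms. A self-contained sketch assuming a hypothetical 32-bit pointer width (this is illustration, not the LLVM API):

```cpp
#include <cstdint>

// Conceptual sketch: normalize an integer of arbitrary bit width to a
// 32-bit "intptr_t" before an inttoptr. Wider sources are truncated,
// narrower sources are zero-extended; either way the value arrives at
// exactly pointer width.
uint32_t to_intptr_width(uint64_t v, unsigned src_bits) {
  if (src_bits > 32)
    return static_cast<uint32_t>(v);        // trunc to pointer width
  // zext: keep only the low src_bits; the high bits become zero.
  uint64_t mask = (src_bits == 64) ? ~uint64_t(0)
                                   : ((uint64_t(1) << src_bits) - 1);
  return static_cast<uint32_t>(v & mask);   // zext to pointer width
}
```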
diff --git a/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp b/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp
index e59406c..7c00c2c 100644
--- a/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp
@@ -1589,24 +1589,24 @@ Instruction *InstCombiner::visitICmpInstWithCastAndCast(ICmpInst &ICI) {
 
 Instruction *InstCombiner::visitICmpInst(ICmpInst &I) {
   bool Changed = false;
+  Value *Op0 = I.getOperand(0), *Op1 = I.getOperand(1);
   
   /// Orders the operands of the compare so that they are listed from most
   /// complex to least complex.  This puts constants before unary operators,
   /// before binary operators.
-  if (getComplexity(I.getOperand(0)) < getComplexity(I.getOperand(1))) {
+  if (getComplexity(Op0) < getComplexity(Op1)) {
     I.swapOperands();
+    std::swap(Op0, Op1);
     Changed = true;
   }
   
-  Value *Op0 = I.getOperand(0), *Op1 = I.getOperand(1);
-  
   if (Value *V = SimplifyICmpInst(I.getPredicate(), Op0, Op1, TD))
     return ReplaceInstUsesWith(I, V);
   
   const Type *Ty = Op0->getType();
 
   // icmp's with boolean values can always be turned into bitwise operations
-  if (Ty == Type::getInt1Ty(I.getContext())) {
+  if (Ty->isInteger(1)) {
     switch (I.getPredicate()) {
     default: llvm_unreachable("Invalid icmp instruction!");
     case ICmpInst::ICMP_EQ: {               // icmp eq i1 A, B -> ~(A^B)
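The `ICMP_EQ` case shown at the end of the hunk illustrates why boolean compares can always be lowered to bitwise operations: on `i1` values, `icmp eq A, B` is just xnor. A one-line sketch of that identity in plain C++:

```cpp
// Conceptual sketch: on i1 (boolean) operands, every icmp predicate has a
// bitwise formula. The case in the diff:
//   icmp eq i1 A, B  ->  ~(A ^ B)   (xnor)
bool icmp_eq_i1(bool a, bool b) { return !(a ^ b); }
```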
diff --git a/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp b/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp
index ae728dd..2d13298 100644
--- a/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp
@@ -115,8 +115,9 @@ static Instruction *InstCombineLoadCast(InstCombiner &IC, LoadInst &LI,
         // Okay, we are casting from one integer or pointer type to another of
         // the same size.  Instead of casting the pointer before the load, cast
         // the result of the loaded value.
-        Value *NewLoad = 
+        LoadInst *NewLoad = 
           IC.Builder->CreateLoad(CastOp, LI.isVolatile(), CI->getName());
+        NewLoad->setAlignment(LI.getAlignment());
         // Now cast the result of the load.
         return new BitCastInst(NewLoad, LI.getType());
       }
@@ -199,12 +200,15 @@ Instruction *InstCombiner::visitLoadInst(LoadInst &LI) {
     //
     if (SelectInst *SI = dyn_cast<SelectInst>(Op)) {
       // load (select (Cond, &V1, &V2))  --> select(Cond, load &V1, load &V2).
-      if (isSafeToLoadUnconditionally(SI->getOperand(1), SI) &&
-          isSafeToLoadUnconditionally(SI->getOperand(2), SI)) {
-        Value *V1 = Builder->CreateLoad(SI->getOperand(1),
-                                        SI->getOperand(1)->getName()+".val");
-        Value *V2 = Builder->CreateLoad(SI->getOperand(2),
-                                        SI->getOperand(2)->getName()+".val");
+      unsigned Align = LI.getAlignment();
+      if (isSafeToLoadUnconditionally(SI->getOperand(1), SI, Align, TD) &&
+          isSafeToLoadUnconditionally(SI->getOperand(2), SI, Align, TD)) {
+        LoadInst *V1 = Builder->CreateLoad(SI->getOperand(1),
+                                           SI->getOperand(1)->getName()+".val");
+        LoadInst *V2 = Builder->CreateLoad(SI->getOperand(2),
+                                           SI->getOperand(2)->getName()+".val");
+        V1->setAlignment(Align);
+        V2->setAlignment(Align);
         return SelectInst::Create(SI->getCondition(), V1, V2);
       }
 
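The load/select hunk above rewrites `load (select Cond, &V1, &V2)` into `select Cond, (load &V1), (load &V2)`, now threading the original load's alignment through both new loads. The transformed form reads through both pointers, which is why the guard requires each to be safe to load unconditionally. A plain C++ sketch of the shape of the rewrite (hypothetical names, not the LLVM API):

```cpp
// Conceptual sketch: after the transform, both loads are hoisted above the
// select. This is only valid when BOTH pointers are known safe to
// dereference unconditionally, since the original code read only one.
int load_of_select(bool cond, const int *p1, const int *p2) {
  int v1 = *p1;   // hoisted load; must be safe even when cond is true
  int v2 = *p2;   // hoisted load; must be safe even when cond is false
  return cond ? v1 : v2;
}
```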
diff --git a/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineSimplifyDemanded.cpp b/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineSimplifyDemanded.cpp
index 74a1b68..53a5684 100644
--- a/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineSimplifyDemanded.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineSimplifyDemanded.cpp
@@ -138,11 +138,11 @@ Value *InstCombiner::SimplifyDemandedUseBits(Value *V, APInt DemandedMask,
     return 0;
   
   APInt LHSKnownZero(BitWidth, 0), LHSKnownOne(BitWidth, 0);
-  APInt &RHSKnownZero = KnownZero, &RHSKnownOne = KnownOne;
+  APInt RHSKnownZero(BitWidth, 0), RHSKnownOne(BitWidth, 0);
 
   Instruction *I = dyn_cast<Instruction>(V);
   if (!I) {
-    ComputeMaskedBits(V, DemandedMask, RHSKnownZero, RHSKnownOne, Depth);
+    ComputeMaskedBits(V, DemandedMask, KnownZero, KnownOne, Depth);
     return 0;        // Only analyze instructions.
   }
 
@@ -219,7 +219,7 @@ Value *InstCombiner::SimplifyDemandedUseBits(Value *V, APInt DemandedMask,
   
   switch (I->getOpcode()) {
   default:
-    ComputeMaskedBits(I, DemandedMask, RHSKnownZero, RHSKnownOne, Depth);
+    ComputeMaskedBits(I, DemandedMask, KnownZero, KnownOne, Depth);
     break;
   case Instruction::And:
     // If either the LHS or the RHS are Zero, the result is zero.
@@ -249,9 +249,9 @@ Value *InstCombiner::SimplifyDemandedUseBits(Value *V, APInt DemandedMask,
       return I;
       
     // Output known-1 bits are only known if set in both the LHS & RHS.
-    RHSKnownOne &= LHSKnownOne;
+    KnownOne = RHSKnownOne & LHSKnownOne;
     // Output known-0 are known to be clear if zero in either the LHS | RHS.
-    RHSKnownZero |= LHSKnownZero;
+    KnownZero = RHSKnownZero | LHSKnownZero;
     break;
   case Instruction::Or:
     // If either the LHS or the RHS are One, the result is One.
@@ -286,9 +286,9 @@ Value *InstCombiner::SimplifyDemandedUseBits(Value *V, APInt DemandedMask,
       return I;
           
     // Output known-0 bits are only known if clear in both the LHS & RHS.
-    RHSKnownZero &= LHSKnownZero;
+    KnownZero = RHSKnownZero & LHSKnownZero;
     // Output known-1 are known to be set if set in either the LHS | RHS.
-    RHSKnownOne |= LHSKnownOne;
+    KnownOne = RHSKnownOne | LHSKnownOne;
     break;
   case Instruction::Xor: {
     if (SimplifyDemandedBits(I->getOperandUse(1), DemandedMask,
@@ -306,13 +306,6 @@ Value *InstCombiner::SimplifyDemandedUseBits(Value *V, APInt DemandedMask,
     if ((DemandedMask & LHSKnownZero) == DemandedMask)
       return I->getOperand(1);
     
-    // Output known-0 bits are known if clear or set in both the LHS & RHS.
-    APInt KnownZeroOut = (RHSKnownZero & LHSKnownZero) | 
-                         (RHSKnownOne & LHSKnownOne);
-    // Output known-1 are known to be set if set in only one of the LHS, RHS.
-    APInt KnownOneOut = (RHSKnownZero & LHSKnownOne) | 
-                        (RHSKnownOne & LHSKnownZero);
-    
     // If all of the demanded bits are known to be zero on one side or the
     // other, turn this into an *inclusive* or.
     //    e.g. (A & C1)^(B & C2) -> (A & C1)|(B & C2) iff C1&C2 == 0
@@ -368,10 +361,11 @@ Value *InstCombiner::SimplifyDemandedUseBits(Value *V, APInt DemandedMask,
           BinaryOperator::CreateXor(NewAnd, XorC, "tmp");
         return InsertNewInstBefore(NewXor, *I);
       }
-          
-          
-    RHSKnownZero = KnownZeroOut;
-    RHSKnownOne  = KnownOneOut;
+
+    // Output known-0 bits are known if clear or set in both the LHS & RHS.
+    KnownZero= (RHSKnownZero & LHSKnownZero) | (RHSKnownOne & LHSKnownOne);
+    // Output known-1 are known to be set if set in only one of the LHS, RHS.
+    KnownOne = (RHSKnownZero & LHSKnownOne) | (RHSKnownOne & LHSKnownZero);
     break;
   }
   case Instruction::Select:
@@ -389,61 +383,61 @@ Value *InstCombiner::SimplifyDemandedUseBits(Value *V, APInt DemandedMask,
       return I;
     
     // Only known if known in both the LHS and RHS.
-    RHSKnownOne &= LHSKnownOne;
-    RHSKnownZero &= LHSKnownZero;
+    KnownOne = RHSKnownOne & LHSKnownOne;
+    KnownZero = RHSKnownZero & LHSKnownZero;
     break;
   case Instruction::Trunc: {
     unsigned truncBf = I->getOperand(0)->getType()->getScalarSizeInBits();
     DemandedMask.zext(truncBf);
-    RHSKnownZero.zext(truncBf);
-    RHSKnownOne.zext(truncBf);
+    KnownZero.zext(truncBf);
+    KnownOne.zext(truncBf);
     if (SimplifyDemandedBits(I->getOperandUse(0), DemandedMask, 
-                             RHSKnownZero, RHSKnownOne, Depth+1))
+                             KnownZero, KnownOne, Depth+1))
       return I;
     DemandedMask.trunc(BitWidth);
-    RHSKnownZero.trunc(BitWidth);
-    RHSKnownOne.trunc(BitWidth);
-    assert(!(RHSKnownZero & RHSKnownOne) && "Bits known to be one AND zero?"); 
+    KnownZero.trunc(BitWidth);
+    KnownOne.trunc(BitWidth);
+    assert(!(KnownZero & KnownOne) && "Bits known to be one AND zero?"); 
     break;
   }
   case Instruction::BitCast:
     if (!I->getOperand(0)->getType()->isIntOrIntVector())
-      return false;  // vector->int or fp->int?
+      return 0;  // vector->int or fp->int?
 
     if (const VectorType *DstVTy = dyn_cast<VectorType>(I->getType())) {
       if (const VectorType *SrcVTy =
             dyn_cast<VectorType>(I->getOperand(0)->getType())) {
         if (DstVTy->getNumElements() != SrcVTy->getNumElements())
           // Don't touch a bitcast between vectors of different element counts.
-          return false;
+          return 0;
       } else
         // Don't touch a scalar-to-vector bitcast.
-        return false;
+        return 0;
     } else if (isa<VectorType>(I->getOperand(0)->getType()))
       // Don't touch a vector-to-scalar bitcast.
-      return false;
+      return 0;
 
     if (SimplifyDemandedBits(I->getOperandUse(0), DemandedMask,
-                             RHSKnownZero, RHSKnownOne, Depth+1))
+                             KnownZero, KnownOne, Depth+1))
       return I;
-    assert(!(RHSKnownZero & RHSKnownOne) && "Bits known to be one AND zero?"); 
+    assert(!(KnownZero & KnownOne) && "Bits known to be one AND zero?"); 
     break;
   case Instruction::ZExt: {
     // Compute the bits in the result that are not present in the input.
     unsigned SrcBitWidth =I->getOperand(0)->getType()->getScalarSizeInBits();
     
     DemandedMask.trunc(SrcBitWidth);
-    RHSKnownZero.trunc(SrcBitWidth);
-    RHSKnownOne.trunc(SrcBitWidth);
+    KnownZero.trunc(SrcBitWidth);
+    KnownOne.trunc(SrcBitWidth);
     if (SimplifyDemandedBits(I->getOperandUse(0), DemandedMask,
-                             RHSKnownZero, RHSKnownOne, Depth+1))
+                             KnownZero, KnownOne, Depth+1))
       return I;
     DemandedMask.zext(BitWidth);
-    RHSKnownZero.zext(BitWidth);
-    RHSKnownOne.zext(BitWidth);
-    assert(!(RHSKnownZero & RHSKnownOne) && "Bits known to be one AND zero?"); 
+    KnownZero.zext(BitWidth);
+    KnownOne.zext(BitWidth);
+    assert(!(KnownZero & KnownOne) && "Bits known to be one AND zero?"); 
     // The top bits are known to be zero.
-    RHSKnownZero |= APInt::getHighBitsSet(BitWidth, BitWidth - SrcBitWidth);
+    KnownZero |= APInt::getHighBitsSet(BitWidth, BitWidth - SrcBitWidth);
     break;
   }
   case Instruction::SExt: {
@@ -460,27 +454,27 @@ Value *InstCombiner::SimplifyDemandedUseBits(Value *V, APInt DemandedMask,
       InputDemandedBits.set(SrcBitWidth-1);
       
     InputDemandedBits.trunc(SrcBitWidth);
-    RHSKnownZero.trunc(SrcBitWidth);
-    RHSKnownOne.trunc(SrcBitWidth);
+    KnownZero.trunc(SrcBitWidth);
+    KnownOne.trunc(SrcBitWidth);
     if (SimplifyDemandedBits(I->getOperandUse(0), InputDemandedBits,
-                             RHSKnownZero, RHSKnownOne, Depth+1))
+                             KnownZero, KnownOne, Depth+1))
       return I;
     InputDemandedBits.zext(BitWidth);
-    RHSKnownZero.zext(BitWidth);
-    RHSKnownOne.zext(BitWidth);
-    assert(!(RHSKnownZero & RHSKnownOne) && "Bits known to be one AND zero?"); 
+    KnownZero.zext(BitWidth);
+    KnownOne.zext(BitWidth);
+    assert(!(KnownZero & KnownOne) && "Bits known to be one AND zero?"); 
       
     // If the sign bit of the input is known set or clear, then we know the
     // top bits of the result.
 
     // If the input sign bit is known zero, or if the NewBits are not demanded
     // convert this into a zero extension.
-    if (RHSKnownZero[SrcBitWidth-1] || (NewBits & ~DemandedMask) == NewBits) {
+    if (KnownZero[SrcBitWidth-1] || (NewBits & ~DemandedMask) == NewBits) {
       // Convert to ZExt cast
       CastInst *NewCast = new ZExtInst(I->getOperand(0), VTy, I->getName());
       return InsertNewInstBefore(NewCast, *I);
-    } else if (RHSKnownOne[SrcBitWidth-1]) {    // Input sign bit known set
-      RHSKnownOne |= NewBits;
+    } else if (KnownOne[SrcBitWidth-1]) {    // Input sign bit known set
+      KnownOne |= NewBits;
     }
     break;
   }
@@ -540,12 +534,12 @@ Value *InstCombiner::SimplifyDemandedUseBits(Value *V, APInt DemandedMask,
       
       // Bits are known one if they are known zero in one operand and one in the
       // other, and there is no input carry.
-      RHSKnownOne = ((LHSKnownZero & RHSVal) | 
-                     (LHSKnownOne & ~RHSVal)) & ~CarryBits;
+      KnownOne = ((LHSKnownZero & RHSVal) | 
+                  (LHSKnownOne & ~RHSVal)) & ~CarryBits;
       
       // Bits are known zero if they are known zero in both operands and there
       // is no input carry.
-      RHSKnownZero = LHSKnownZero & ~RHSVal & ~CarryBits;
+      KnownZero = LHSKnownZero & ~RHSVal & ~CarryBits;
     } else {
       // If the high-bits of this ADD are not demanded, then it does not demand
       // the high bits of its LHS or RHS.
@@ -578,21 +572,21 @@ Value *InstCombiner::SimplifyDemandedUseBits(Value *V, APInt DemandedMask,
     }
     // Otherwise just hand the sub off to ComputeMaskedBits to fill in
     // the known zeros and ones.
-    ComputeMaskedBits(V, DemandedMask, RHSKnownZero, RHSKnownOne, Depth);
+    ComputeMaskedBits(V, DemandedMask, KnownZero, KnownOne, Depth);
     break;
   case Instruction::Shl:
     if (ConstantInt *SA = dyn_cast<ConstantInt>(I->getOperand(1))) {
       uint64_t ShiftAmt = SA->getLimitedValue(BitWidth);
       APInt DemandedMaskIn(DemandedMask.lshr(ShiftAmt));
       if (SimplifyDemandedBits(I->getOperandUse(0), DemandedMaskIn, 
-                               RHSKnownZero, RHSKnownOne, Depth+1))
+                               KnownZero, KnownOne, Depth+1))
         return I;
-      assert(!(RHSKnownZero & RHSKnownOne) && "Bits known to be one AND zero?");
-      RHSKnownZero <<= ShiftAmt;
-      RHSKnownOne  <<= ShiftAmt;
+      assert(!(KnownZero & KnownOne) && "Bits known to be one AND zero?");
+      KnownZero <<= ShiftAmt;
+      KnownOne  <<= ShiftAmt;
       // low bits known zero.
       if (ShiftAmt)
-        RHSKnownZero |= APInt::getLowBitsSet(BitWidth, ShiftAmt);
+        KnownZero |= APInt::getLowBitsSet(BitWidth, ShiftAmt);
     }
     break;
   case Instruction::LShr:
@@ -603,15 +597,15 @@ Value *InstCombiner::SimplifyDemandedUseBits(Value *V, APInt DemandedMask,
       // Unsigned shift right.
       APInt DemandedMaskIn(DemandedMask.shl(ShiftAmt));
       if (SimplifyDemandedBits(I->getOperandUse(0), DemandedMaskIn,
-                               RHSKnownZero, RHSKnownOne, Depth+1))
+                               KnownZero, KnownOne, Depth+1))
         return I;
-      assert(!(RHSKnownZero & RHSKnownOne) && "Bits known to be one AND zero?");
-      RHSKnownZero = APIntOps::lshr(RHSKnownZero, ShiftAmt);
-      RHSKnownOne  = APIntOps::lshr(RHSKnownOne, ShiftAmt);
+      assert(!(KnownZero & KnownOne) && "Bits known to be one AND zero?");
+      KnownZero = APIntOps::lshr(KnownZero, ShiftAmt);
+      KnownOne  = APIntOps::lshr(KnownOne, ShiftAmt);
       if (ShiftAmt) {
         // Compute the new bits that are at the top now.
         APInt HighBits(APInt::getHighBitsSet(BitWidth, ShiftAmt));
-        RHSKnownZero |= HighBits;  // high bits known zero.
+        KnownZero |= HighBits;  // high bits known zero.
       }
     }
     break;
@@ -642,13 +636,13 @@ Value *InstCombiner::SimplifyDemandedUseBits(Value *V, APInt DemandedMask,
       if (DemandedMask.countLeadingZeros() <= ShiftAmt)
         DemandedMaskIn.set(BitWidth-1);
       if (SimplifyDemandedBits(I->getOperandUse(0), DemandedMaskIn,
-                               RHSKnownZero, RHSKnownOne, Depth+1))
+                               KnownZero, KnownOne, Depth+1))
         return I;
-      assert(!(RHSKnownZero & RHSKnownOne) && "Bits known to be one AND zero?");
+      assert(!(KnownZero & KnownOne) && "Bits known to be one AND zero?");
       // Compute the new bits that are at the top now.
       APInt HighBits(APInt::getHighBitsSet(BitWidth, ShiftAmt));
-      RHSKnownZero = APIntOps::lshr(RHSKnownZero, ShiftAmt);
-      RHSKnownOne  = APIntOps::lshr(RHSKnownOne, ShiftAmt);
+      KnownZero = APIntOps::lshr(KnownZero, ShiftAmt);
+      KnownOne  = APIntOps::lshr(KnownOne, ShiftAmt);
         
       // Handle the sign bits.
       APInt SignBit(APInt::getSignBit(BitWidth));
@@ -657,14 +651,14 @@ Value *InstCombiner::SimplifyDemandedUseBits(Value *V, APInt DemandedMask,
         
       // If the input sign bit is known to be zero, or if none of the top bits
       // are demanded, turn this into an unsigned shift right.
-      if (BitWidth <= ShiftAmt || RHSKnownZero[BitWidth-ShiftAmt-1] || 
+      if (BitWidth <= ShiftAmt || KnownZero[BitWidth-ShiftAmt-1] || 
           (HighBits & ~DemandedMask) == HighBits) {
         // Perform the logical shift right.
         Instruction *NewVal = BinaryOperator::CreateLShr(
                           I->getOperand(0), SA, I->getName());
         return InsertNewInstBefore(NewVal, *I);
-      } else if ((RHSKnownOne & SignBit) != 0) { // New bits are known one.
-        RHSKnownOne |= HighBits;
+      } else if ((KnownOne & SignBit) != 0) { // New bits are known one.
+        KnownOne |= HighBits;
       }
     }
     break;
@@ -681,10 +675,19 @@ Value *InstCombiner::SimplifyDemandedUseBits(Value *V, APInt DemandedMask,
                                  LHSKnownZero, LHSKnownOne, Depth+1))
           return I;
 
+        // The low bits of LHS are unchanged by the srem.
+        KnownZero = LHSKnownZero & LowBits;
+        KnownOne = LHSKnownOne & LowBits;
+
+        // If LHS is non-negative or has all low bits zero, then the upper bits
+        // are all zero.
         if (LHSKnownZero[BitWidth-1] || ((LHSKnownZero & LowBits) == LowBits))
-          LHSKnownZero |= ~LowBits;
+          KnownZero |= ~LowBits;
 
-        KnownZero |= LHSKnownZero & DemandedMask;
+        // If LHS is negative and not all low bits are zero, then the upper bits
+        // are all one.
+        if (LHSKnownOne[BitWidth-1] && ((LHSKnownOne & LowBits) != 0))
+          KnownOne |= ~LowBits;
 
         assert(!(KnownZero & KnownOne) && "Bits known to be one AND zero?"); 
       }
@@ -743,15 +746,15 @@ Value *InstCombiner::SimplifyDemandedUseBits(Value *V, APInt DemandedMask,
       }
       }
     }
-    ComputeMaskedBits(V, DemandedMask, RHSKnownZero, RHSKnownOne, Depth);
+    ComputeMaskedBits(V, DemandedMask, KnownZero, KnownOne, Depth);
     break;
   }
   
   // If the client is only demanding bits that we know, return the known
   // constant.
-  if ((DemandedMask & (RHSKnownZero|RHSKnownOne)) == DemandedMask)
-    return Constant::getIntegerValue(VTy, RHSKnownOne);
-  return false;
+  if ((DemandedMask & (KnownZero|KnownOne)) == DemandedMask)
+    return Constant::getIntegerValue(VTy, KnownOne);
+  return 0;
 }
 
 
@@ -764,7 +767,7 @@ Value *InstCombiner::SimplifyDemandedUseBits(Value *V, APInt DemandedMask,
 /// operation, the operation is simplified, then the resultant value is
 /// returned.  This returns null if no change was made.
 Value *InstCombiner::SimplifyDemandedVectorElts(Value *V, APInt DemandedElts,
-                                                APInt& UndefElts,
+                                                APInt &UndefElts,
                                                 unsigned Depth) {
   unsigned VWidth = cast<VectorType>(V->getType())->getNumElements();
   APInt EltMask(APInt::getAllOnesValue(VWidth));
@@ -774,13 +777,15 @@ Value *InstCombiner::SimplifyDemandedVectorElts(Value *V, APInt DemandedElts,
     // If the entire vector is undefined, just return this info.
     UndefElts = EltMask;
     return 0;
-  } else if (DemandedElts == 0) { // If nothing is demanded, provide undef.
+  }
+  
+  if (DemandedElts == 0) { // If nothing is demanded, provide undef.
     UndefElts = EltMask;
     return UndefValue::get(V->getType());
   }
 
   UndefElts = 0;
-  if (ConstantVector *CP = dyn_cast<ConstantVector>(V)) {
+  if (ConstantVector *CV = dyn_cast<ConstantVector>(V)) {
     const Type *EltTy = cast<VectorType>(V->getType())->getElementType();
     Constant *Undef = UndefValue::get(EltTy);
 
@@ -789,23 +794,25 @@ Value *InstCombiner::SimplifyDemandedVectorElts(Value *V, APInt DemandedElts,
       if (!DemandedElts[i]) {   // If not demanded, set to undef.
         Elts.push_back(Undef);
         UndefElts.set(i);
-      } else if (isa<UndefValue>(CP->getOperand(i))) {   // Already undef.
+      } else if (isa<UndefValue>(CV->getOperand(i))) {   // Already undef.
         Elts.push_back(Undef);
         UndefElts.set(i);
       } else {                               // Otherwise, defined.
-        Elts.push_back(CP->getOperand(i));
+        Elts.push_back(CV->getOperand(i));
       }
 
     // If we changed the constant, return it.
     Constant *NewCP = ConstantVector::get(Elts);
-    return NewCP != CP ? NewCP : 0;
-  } else if (isa<ConstantAggregateZero>(V)) {
+    return NewCP != CV ? NewCP : 0;
+  }
+  
+  if (isa<ConstantAggregateZero>(V)) {
     // Simplify the CAZ to a ConstantVector where the non-demanded elements are
     // set to undef.
     
     // Check if this is identity. If so, return 0 since we are not simplifying
     // anything.
-    if (DemandedElts == ((1ULL << VWidth) -1))
+    if (DemandedElts.isAllOnesValue())
       return 0;
     
     const Type *EltTy = cast<VectorType>(V->getType())->getElementType();
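Much of the SimplifyDemanded diff above is a mechanical rename: results previously accumulated through the `RHSKnownZero`/`RHSKnownOne` aliases are now written directly into `KnownZero`/`KnownOne`. The underlying combine rules for And/Or/Xor are unchanged, and can be sketched with plain masks (a `Known` struct standing in for the pair of `APInt`s; this is illustration, not LLVM code):

```cpp
#include <cstdint>

// Conceptual sketch of known-bits bookkeeping: a bit is "known zero" or
// "known one" when analysis has proved its value; the two masks are
// disjoint. These combines mirror the And/Or/Xor cases in the diff.
struct Known {
  uint32_t zero, one;
};

Known known_and(Known l, Known r) {
  return { l.zero | r.zero,    // zero if clear on either side
           l.one  & r.one };   // one only if set on both sides
}
Known known_or(Known l, Known r) {
  return { l.zero & r.zero,    // zero only if clear on both sides
           l.one  | r.one };   // one if set on either side
}
Known known_xor(Known l, Known r) {
  return { (l.zero & r.zero) | (l.one & r.one),    // equal known bits -> 0
           (l.zero & r.one)  | (l.one & r.zero) }; // differing known bits -> 1
}
```

With fully known operands these reduce to ordinary `&`, `|`, and `^` on the known-one masks.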
diff --git a/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineVectorOps.cpp b/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineVectorOps.cpp
index f11f557..20fda1a 100644
--- a/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineVectorOps.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/InstCombine/InstCombineVectorOps.cpp
@@ -162,7 +162,8 @@ Instruction *InstCombiner::visitExtractElementInst(ExtractElementInst &EI) {
     // property.
     if (EI.getOperand(0)->hasOneUse() && VectorWidth != 1) {
       APInt UndefElts(VectorWidth, 0);
-      APInt DemandedMask(VectorWidth, 1 << IndexVal);
+      APInt DemandedMask(VectorWidth, 0);
+      DemandedMask.set(IndexVal);
       if (Value *V = SimplifyDemandedVectorElts(EI.getOperand(0),
                                                 DemandedMask, UndefElts)) {
         EI.setOperand(0, V);
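The one-line fix above (`DemandedMask.set(IndexVal)` replacing `1 << IndexVal`) matters because the shift is computed in a plain `int`: for vectors wider than 32 elements the shift amount can reach or exceed the integer width, which is undefined behavior, whereas setting a bit in the multi-word `APInt` works for any index. A sketch of the safe multi-word approach (hypothetical helpers, not the `APInt` API):

```cpp
#include <cstdint>
#include <vector>

// Conceptual sketch: set/test a single bit in a multi-word mask. Unlike
// `1 << idx` in a 32-bit int, this is well defined for any index that
// fits in the mask.
void set_bit(std::vector<uint64_t> &mask, unsigned idx) {
  mask[idx / 64] |= uint64_t(1) << (idx % 64);
}
bool test_bit(const std::vector<uint64_t> &mask, unsigned idx) {
  return (mask[idx / 64] >> (idx % 64)) & 1;
}
```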
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp
index c3139a5..21e6f89 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp
@@ -32,7 +32,6 @@
 #include "llvm/ADT/SmallSet.h"
 #include "llvm/Assembly/Writer.h"
 #include "llvm/Support/CallSite.h"
-#include "llvm/Support/CommandLine.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/GetElementPtrTypeIterator.h"
 #include "llvm/Support/PatternMatch.h"
@@ -40,9 +39,6 @@
 using namespace llvm;
 using namespace llvm::PatternMatch;
 
-static cl::opt<bool> FactorCommonPreds("split-critical-paths-tweak",
-                                       cl::init(false), cl::Hidden);
-
 namespace {
   class CodeGenPrepare : public FunctionPass {
     /// TLI - Keep a pointer of a TargetLowering to consult for determining
@@ -63,6 +59,10 @@ namespace {
       AU.addPreserved<ProfileInfo>();
     }
 
+    virtual void releaseMemory() {
+      BackEdges.clear();
+    }
+
   private:
     bool EliminateMostlyEmptyBlocks(Function &F);
     bool CanMergeBlocks(const BasicBlock *BB, const BasicBlock *DestBB) const;
@@ -297,6 +297,70 @@ void CodeGenPrepare::EliminateMostlyEmptyBlock(BasicBlock *BB) {
   DEBUG(dbgs() << "AFTER:\n" << *DestBB << "\n\n\n");
 }
 
+/// FindReusablePredBB - Check all of the predecessors of the block DestPHI
+/// lives in to see if there is a block that we can reuse as a critical edge
+/// from TIBB.
+static BasicBlock *FindReusablePredBB(PHINode *DestPHI, BasicBlock *TIBB) {
+  BasicBlock *Dest = DestPHI->getParent();
+  
+  /// TIPHIValues - This array is lazily computed to determine the values of
+  /// PHIs in Dest that TI would provide.
+  SmallVector<Value*, 32> TIPHIValues;
+  
+  /// TIBBEntryNo - This is a cache to speed up pred queries for TIBB.
+  unsigned TIBBEntryNo = 0;
+  
+  // Check to see if Dest has any blocks that can be used as a split edge for
+  // this terminator.
+  for (unsigned pi = 0, e = DestPHI->getNumIncomingValues(); pi != e; ++pi) {
+    BasicBlock *Pred = DestPHI->getIncomingBlock(pi);
+    // To be usable, the pred has to end with an uncond branch to the dest.
+    BranchInst *PredBr = dyn_cast<BranchInst>(Pred->getTerminator());
+    if (!PredBr || !PredBr->isUnconditional())
+      continue;
+    // Must be empty other than the branch and debug info.
+    BasicBlock::iterator I = Pred->begin();
+    while (isa<DbgInfoIntrinsic>(I))
+      I++;
+    if (&*I != PredBr)
+      continue;
+    // Cannot be the entry block; its label does not get emitted.
+    if (Pred == &Dest->getParent()->getEntryBlock())
+      continue;
+    
+    // Finally, since we know that Dest has phi nodes in it, we have to make
+    // sure that jumping to Pred will have the same effect as going to Dest in
+    // terms of PHI values.
+    PHINode *PN;
+    unsigned PHINo = 0;
+    unsigned PredEntryNo = pi;
+    
+    bool FoundMatch = true;
+    for (BasicBlock::iterator I = Dest->begin();
+         (PN = dyn_cast<PHINode>(I)); ++I, ++PHINo) {
+      if (PHINo == TIPHIValues.size()) {
+        if (PN->getIncomingBlock(TIBBEntryNo) != TIBB)
+          TIBBEntryNo = PN->getBasicBlockIndex(TIBB);
+        TIPHIValues.push_back(PN->getIncomingValue(TIBBEntryNo));
+      }
+      
+      // If the PHI entry doesn't work, we can't use this pred.
+      if (PN->getIncomingBlock(PredEntryNo) != Pred)
+        PredEntryNo = PN->getBasicBlockIndex(Pred);
+      
+      if (TIPHIValues[PHINo] != PN->getIncomingValue(PredEntryNo)) {
+        FoundMatch = false;
+        break;
+      }
+    }
+    
+    // If we found a workable predecessor, change TI to branch to Succ.
+    if (FoundMatch)
+      return Pred;
+  }
+  return 0;  
+}
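The TIBBEntryNo/PredEntryNo caches in the new FindReusablePredBB exploit the fact that the PHI nodes of one block usually list their incoming blocks in the same order, so the index found for one PHI is very likely valid for the next. A minimal sketch of that caching pattern in plain C++ (hypothetical names, no LLVM types; assumes the queried predecessor is always present):

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// A "phi" is a list of (predecessor-name, incoming-value) pairs.
using Phi = std::vector<std::pair<std::string, int>>;

// Return the incoming value for Pred, trying the cached index first and
// falling back to a linear scan (the analogue of getBasicBlockIndex).
int valueFromPred(const Phi &PN, const std::string &Pred, unsigned &CachedIdx) {
  if (CachedIdx >= PN.size() || PN[CachedIdx].first != Pred) {
    for (unsigned i = 0; i != PN.size(); ++i)
      if (PN[i].first == Pred) { CachedIdx = i; break; }
  }
  return PN[CachedIdx].second;
}
```

Since PHIs in a block share the same predecessor set, the cache hits on every PHI after the first whenever the orderings agree, turning a quadratic scan into a linear one.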
+
 
 /// SplitEdgeNicely - Split the critical edge from TI to its specified
 /// successor if it will improve codegen.  We only do this if the successor has
@@ -311,13 +375,12 @@ static void SplitEdgeNicely(TerminatorInst *TI, unsigned SuccNum,
   BasicBlock *Dest = TI->getSuccessor(SuccNum);
   assert(isa<PHINode>(Dest->begin()) &&
          "This should only be called if Dest has a PHI!");
+  PHINode *DestPHI = cast<PHINode>(Dest->begin());
 
   // Do not split edges to EH landing pads.
-  if (InvokeInst *Invoke = dyn_cast<InvokeInst>(TI)) {
+  if (InvokeInst *Invoke = dyn_cast<InvokeInst>(TI))
     if (Invoke->getSuccessor(1) == Dest)
       return;
-  }
-  
 
   // As a hack, never split backedges of loops.  Even though the copy for any
   // PHIs inserted on the backedge would be dead for exits from the loop, we
@@ -325,92 +388,16 @@ static void SplitEdgeNicely(TerminatorInst *TI, unsigned SuccNum,
   if (BackEdges.count(std::make_pair(TIBB, Dest)))
     return;
 
-  if (!FactorCommonPreds) {
-    /// TIPHIValues - This array is lazily computed to determine the values of
-    /// PHIs in Dest that TI would provide.
-    SmallVector<Value*, 32> TIPHIValues;
-
-    // Check to see if Dest has any blocks that can be used as a split edge for
-    // this terminator.
-    for (pred_iterator PI = pred_begin(Dest), E = pred_end(Dest); PI != E; ++PI) {
-      BasicBlock *Pred = *PI;
-      // To be usable, the pred has to end with an uncond branch to the dest.
-      BranchInst *PredBr = dyn_cast<BranchInst>(Pred->getTerminator());
-      if (!PredBr || !PredBr->isUnconditional())
-        continue;
-      // Must be empty other than the branch and debug info.
-      BasicBlock::iterator I = Pred->begin();
-      while (isa<DbgInfoIntrinsic>(I))
-        I++;
-      if (dyn_cast<Instruction>(I) != PredBr)
-        continue;
-      // Cannot be the entry block; its label does not get emitted.
-      if (Pred == &(Dest->getParent()->getEntryBlock()))
-        continue;
-
-      // Finally, since we know that Dest has phi nodes in it, we have to make
-      // sure that jumping to Pred will have the same effect as going to Dest in
-      // terms of PHI values.
-      PHINode *PN;
-      unsigned PHINo = 0;
-      bool FoundMatch = true;
-      for (BasicBlock::iterator I = Dest->begin();
-           (PN = dyn_cast<PHINode>(I)); ++I, ++PHINo) {
-        if (PHINo == TIPHIValues.size())
-          TIPHIValues.push_back(PN->getIncomingValueForBlock(TIBB));
-
-        // If the PHI entry doesn't work, we can't use this pred.
-        if (TIPHIValues[PHINo] != PN->getIncomingValueForBlock(Pred)) {
-          FoundMatch = false;
-          break;
-        }
-      }
-
-      // If we found a workable predecessor, change TI to branch to Succ.
-      if (FoundMatch) {
-        ProfileInfo *PFI = P->getAnalysisIfAvailable<ProfileInfo>();
-        if (PFI)
-          PFI->splitEdge(TIBB, Dest, Pred);
-        Dest->removePredecessor(TIBB);
-        TI->setSuccessor(SuccNum, Pred);
-        return;
-      }
-    }
-
-    SplitCriticalEdge(TI, SuccNum, P, true);
+  if (BasicBlock *ReuseBB = FindReusablePredBB(DestPHI, TIBB)) {
+    ProfileInfo *PFI = P->getAnalysisIfAvailable<ProfileInfo>();
+    if (PFI)
+      PFI->splitEdge(TIBB, Dest, ReuseBB);
+    Dest->removePredecessor(TIBB);
+    TI->setSuccessor(SuccNum, ReuseBB);
     return;
   }
 
-  PHINode *PN;
-  SmallVector<Value*, 8> TIPHIValues;
-  for (BasicBlock::iterator I = Dest->begin();
-       (PN = dyn_cast<PHINode>(I)); ++I)
-    TIPHIValues.push_back(PN->getIncomingValueForBlock(TIBB));
-
-  SmallVector<BasicBlock*, 8> IdenticalPreds;
-  for (pred_iterator PI = pred_begin(Dest), E = pred_end(Dest); PI != E; ++PI) {
-    BasicBlock *Pred = *PI;
-    if (BackEdges.count(std::make_pair(Pred, Dest)))
-      continue;
-    if (PI == TIBB)
-      IdenticalPreds.push_back(Pred);
-    else {
-      bool Identical = true;
-      unsigned PHINo = 0;
-      for (BasicBlock::iterator I = Dest->begin();
-           (PN = dyn_cast<PHINode>(I)); ++I, ++PHINo)
-        if (TIPHIValues[PHINo] != PN->getIncomingValueForBlock(Pred)) {
-          Identical = false;
-          break;
-        }
-      if (Identical)
-        IdenticalPreds.push_back(Pred);
-    }
-  }
-
-  assert(!IdenticalPreds.empty());
-  SplitBlockPredecessors(Dest, &IdenticalPreds[0], IdenticalPreds.size(),
-                         ".critedge", P);
+  SplitCriticalEdge(TI, SuccNum, P, true);
 }
 
 
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp
index 320afa1..09c01d3 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp
@@ -44,8 +44,14 @@ namespace {
 
     virtual bool runOnFunction(Function &F) {
       bool Changed = false;
+      
+      DominatorTree &DT = getAnalysis<DominatorTree>();
+      
       for (Function::iterator I = F.begin(), E = F.end(); I != E; ++I)
-        Changed |= runOnBasicBlock(*I);
+        // Only check non-dead blocks.  Dead blocks may have strange pointer
+        // cycles that will confuse alias analysis.
+        if (DT.isReachableFromEntry(I))
+          Changed |= runOnBasicBlock(*I);
       return Changed;
     }
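The guard added above runs the per-block transform only on blocks reachable from the entry, because dead blocks may contain self-referential pointer cycles that confuse alias analysis. A toy version of the same reachability filter over an adjacency-list CFG (illustrative only; LLVM uses DominatorTree::isReachableFromEntry instead of a fresh DFS):

```cpp
#include <cassert>
#include <vector>

// Succs[b] lists the successor blocks of block b; block 0 is the entry.
// Returns a bit per block: true iff the block is reachable from entry.
std::vector<bool> reachableFromEntry(const std::vector<std::vector<int>> &Succs) {
  std::vector<bool> Seen(Succs.size(), false);
  std::vector<int> Work{0};
  while (!Work.empty()) {
    int B = Work.back();
    Work.pop_back();
    if (Seen[B]) continue;
    Seen[B] = true;
    for (int S : Succs[B]) Work.push_back(S);
  }
  return Seen;  // process only blocks where Seen[b] is true
}
```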
     
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/GVN.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/GVN.cpp
index 292a4b3..3ce7482 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/GVN.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/GVN.cpp
@@ -60,6 +60,7 @@ STATISTIC(NumPRELoad,   "Number of loads PRE'd");
 static cl::opt<bool> EnablePRE("enable-pre",
                                cl::init(true), cl::Hidden);
 static cl::opt<bool> EnableLoadPRE("enable-load-pre", cl::init(true));
+static cl::opt<bool> EnableFullLoadPRE("enable-full-load-pre", cl::init(false));
 
 //===----------------------------------------------------------------------===//
 //                         ValueTable Class
@@ -1537,10 +1538,12 @@ bool GVN::processNonLocalLoad(LoadInst *LI,
   // at least one of the values is LI.  Since this means that we won't be able
   // to eliminate LI even if we insert uses in the other predecessors, we will
   // end up increasing code size.  Reject this by scanning for LI.
-  for (unsigned i = 0, e = ValuesPerBlock.size(); i != e; ++i)
-    if (ValuesPerBlock[i].isSimpleValue() &&
-        ValuesPerBlock[i].getSimpleValue() == LI)
-      return false;
+  if (!EnableFullLoadPRE) {
+    for (unsigned i = 0, e = ValuesPerBlock.size(); i != e; ++i)
+      if (ValuesPerBlock[i].isSimpleValue() &&
+          ValuesPerBlock[i].getSimpleValue() == LI)
+        return false;
+  }
 
   // FIXME: It is extremely unclear what this loop is doing, other than
   // artificially restricting loadpre.
@@ -1564,13 +1567,9 @@ bool GVN::processNonLocalLoad(LoadInst *LI,
       return false;
   }
 
-  // Okay, we have some hope :).  Check to see if the loaded value is fully
-  // available in all but one predecessor.
-  // FIXME: If we could restructure the CFG, we could make a common pred with
-  // all the preds that don't have an available LI and insert a new load into
-  // that one block.
-  BasicBlock *UnavailablePred = 0;
-
+  // Check to see how many predecessors have the loaded value fully
+  // available.
+  DenseMap<BasicBlock*, Value*> PredLoads;
   DenseMap<BasicBlock*, char> FullyAvailableBlocks;
   for (unsigned i = 0, e = ValuesPerBlock.size(); i != e; ++i)
     FullyAvailableBlocks[ValuesPerBlock[i].BB] = true;
@@ -1579,79 +1578,93 @@ bool GVN::processNonLocalLoad(LoadInst *LI,
 
   for (pred_iterator PI = pred_begin(LoadBB), E = pred_end(LoadBB);
        PI != E; ++PI) {
-    if (IsValueFullyAvailableInBlock(*PI, FullyAvailableBlocks))
+    BasicBlock *Pred = *PI;
+    if (IsValueFullyAvailableInBlock(Pred, FullyAvailableBlocks)) {
       continue;
-
-    // If this load is not available in multiple predecessors, reject it.
-    if (UnavailablePred && UnavailablePred != *PI)
+    }
+    PredLoads[Pred] = 0;
+    // We don't currently handle critical edges :(
+    if (Pred->getTerminator()->getNumSuccessors() != 1) {
+      DEBUG(dbgs() << "COULD NOT PRE LOAD BECAUSE OF CRITICAL EDGE '"
+            << Pred->getName() << "': " << *LI << '\n');
       return false;
-    UnavailablePred = *PI;
+    }
   }
 
-  assert(UnavailablePred != 0 &&
+  // Decide whether PRE is profitable for this load.
+  unsigned NumUnavailablePreds = PredLoads.size();
+  assert(NumUnavailablePreds != 0 &&
          "Fully available value should be eliminated above!");
-
-  // We don't currently handle critical edges :(
-  if (UnavailablePred->getTerminator()->getNumSuccessors() != 1) {
-    DEBUG(dbgs() << "COULD NOT PRE LOAD BECAUSE OF CRITICAL EDGE '"
-                 << UnavailablePred->getName() << "': " << *LI << '\n');
-    return false;
+  if (!EnableFullLoadPRE) {
+    // If this load is unavailable in multiple predecessors, reject it.
+    // FIXME: If we could restructure the CFG, we could make a common pred with
+    // all the preds that don't have an available LI and insert a new load into
+    // that one block.
+    if (NumUnavailablePreds != 1)
+      return false;
   }
-  
-  // Do PHI translation to get its value in the predecessor if necessary.  The
-  // returned pointer (if non-null) is guaranteed to dominate UnavailablePred.
-  //
+
+  // Check if the load can safely be moved to all the unavailable predecessors.
+  bool CanDoPRE = true;
   SmallVector<Instruction*, 8> NewInsts;
-  
-  // If all preds have a single successor, then we know it is safe to insert the
-  // load on the pred (?!?), so we can insert code to materialize the pointer if
-  // it is not available.
-  PHITransAddr Address(LI->getOperand(0), TD);
-  Value *LoadPtr = 0;
-  if (allSingleSucc) {
-    LoadPtr = Address.PHITranslateWithInsertion(LoadBB, UnavailablePred,
-                                                *DT, NewInsts);
-  } else {
-    Address.PHITranslateValue(LoadBB, UnavailablePred);
-    LoadPtr = Address.getAddr();
+  for (DenseMap<BasicBlock*, Value*>::iterator I = PredLoads.begin(),
+         E = PredLoads.end(); I != E; ++I) {
+    BasicBlock *UnavailablePred = I->first;
+
+    // Do PHI translation to get its value in the predecessor if necessary.  The
+    // returned pointer (if non-null) is guaranteed to dominate UnavailablePred.
+
+    // If all preds have a single successor, then we know it is safe to insert
+    // the load on the pred (?!?), so we can insert code to materialize the
+    // pointer if it is not available.
+    PHITransAddr Address(LI->getOperand(0), TD);
+    Value *LoadPtr = 0;
+    if (allSingleSucc) {
+      LoadPtr = Address.PHITranslateWithInsertion(LoadBB, UnavailablePred,
+                                                  *DT, NewInsts);
+    } else {
+      Address.PHITranslateValue(LoadBB, UnavailablePred);
+      LoadPtr = Address.getAddr();
     
-    // Make sure the value is live in the predecessor.
-    if (Instruction *Inst = dyn_cast_or_null<Instruction>(LoadPtr))
-      if (!DT->dominates(Inst->getParent(), UnavailablePred))
-        LoadPtr = 0;
-  }
+      // Make sure the value is live in the predecessor.
+      if (Instruction *Inst = dyn_cast_or_null<Instruction>(LoadPtr))
+        if (!DT->dominates(Inst->getParent(), UnavailablePred))
+          LoadPtr = 0;
+    }
 
-  // If we couldn't find or insert a computation of this phi translated value,
-  // we fail PRE.
-  if (LoadPtr == 0) {
-    assert(NewInsts.empty() && "Shouldn't insert insts on failure");
-    DEBUG(dbgs() << "COULDN'T INSERT PHI TRANSLATED VALUE OF: "
-                 << *LI->getOperand(0) << "\n");
-    return false;
-  }
+    // If we couldn't find or insert a computation of this phi translated value,
+    // we fail PRE.
+    if (LoadPtr == 0) {
+      DEBUG(dbgs() << "COULDN'T INSERT PHI TRANSLATED VALUE OF: "
+            << *LI->getOperand(0) << "\n");
+      CanDoPRE = false;
+      break;
+    }
 
-  // Assign value numbers to these new instructions.
-  for (unsigned i = 0, e = NewInsts.size(); i != e; ++i) {
-    // FIXME: We really _ought_ to insert these value numbers into their 
-    // parent's availability map.  However, in doing so, we risk getting into
-    // ordering issues.  If a block hasn't been processed yet, we would be
-    // marking a value as AVAIL-IN, which isn't what we intend.
-    VN.lookup_or_add(NewInsts[i]);
+    // Make sure it is valid to move this load here.  We have to watch out for:
+    //  @1 = getelementptr (i8* p, ...
+    //  test p and branch if == 0
+    //  load @1
+    // It is valid to have the getelementptr before the test, even if p can be 0,
+    // as getelementptr only does address arithmetic.
+    // If we are not pushing the value through any multiple-successor blocks
+    // we do not have this case.  Otherwise, check that the load is safe to
+    // put anywhere; this can be improved, but should be conservatively safe.
+    if (!allSingleSucc &&
+        // FIXME: REEVALUATE THIS.
+        !isSafeToLoadUnconditionally(LoadPtr,
+                                     UnavailablePred->getTerminator(),
+                                     LI->getAlignment(), TD)) {
+      CanDoPRE = false;
+      break;
+    }
+
+    I->second = LoadPtr;
   }
-  
-  // Make sure it is valid to move this load here.  We have to watch out for:
-  //  @1 = getelementptr (i8* p, ...
-  //  test p and branch if == 0
-  //  load @1
-  // It is valid to have the getelementptr before the test, even if p can be 0,
-  // as getelementptr only does address arithmetic.
-  // If we are not pushing the value through any multiple-successor blocks
-  // we do not have this case.  Otherwise, check that the load is safe to
-  // put anywhere; this can be improved, but should be conservatively safe.
-  if (!allSingleSucc &&
-      // FIXME: REEVALUTE THIS.
-      !isSafeToLoadUnconditionally(LoadPtr, UnavailablePred->getTerminator())) {
-    assert(NewInsts.empty() && "Should not have inserted instructions");
+
+  if (!CanDoPRE) {
+    while (!NewInsts.empty())
+      NewInsts.pop_back_val()->eraseFromParent();
     return false;
   }
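The restructured load-PRE above changes the shape of the algorithm: instead of bailing as soon as a second unavailable predecessor shows up, it first records every unavailable predecessor in PredLoads, then tries to materialize an address in each one, and on any failure erases everything inserted so far. A hedged sketch of that collect-then-rollback pattern, with a hypothetical translate callback standing in for PHI translation:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Try to compute an address for the load in every unavailable predecessor.
// On failure, discard all speculatively "inserted" instructions and give up,
// mirroring the NewInsts.pop_back_val()->eraseFromParent() loop in the patch.
bool tryPRE(const std::vector<std::string> &UnavailPreds,
            bool (*translate)(const std::string &, std::string &),
            std::vector<std::string> &Inserted) {
  std::map<std::string, std::string> PredLoads;
  for (const std::string &P : UnavailPreds) {
    std::string Addr;
    if (!translate(P, Addr)) {
      Inserted.clear();  // roll back everything inserted so far
      return false;
    }
    Inserted.push_back(Addr);
    PredLoads[P] = Addr;  // remember the per-predecessor address
  }
  return true;  // every predecessor handled; safe to insert the loads
}
```

The key property is that failure is all-or-nothing: either every predecessor gets a usable address, or the IR is left exactly as it was found.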
 
@@ -1663,12 +1676,28 @@ bool GVN::processNonLocalLoad(LoadInst *LI,
           dbgs() << "INSERTED " << NewInsts.size() << " INSTS: "
                  << *NewInsts.back() << '\n');
   
-  Value *NewLoad = new LoadInst(LoadPtr, LI->getName()+".pre", false,
-                                LI->getAlignment(),
-                                UnavailablePred->getTerminator());
+  // Assign value numbers to the new instructions.
+  for (unsigned i = 0, e = NewInsts.size(); i != e; ++i) {
+    // FIXME: We really _ought_ to insert these value numbers into their 
+    // parent's availability map.  However, in doing so, we risk getting into
+    // ordering issues.  If a block hasn't been processed yet, we would be
+    // marking a value as AVAIL-IN, which isn't what we intend.
+    VN.lookup_or_add(NewInsts[i]);
+  }
 
-  // Add the newly created load.
-  ValuesPerBlock.push_back(AvailableValueInBlock::get(UnavailablePred,NewLoad));
+  for (DenseMap<BasicBlock*, Value*>::iterator I = PredLoads.begin(),
+         E = PredLoads.end(); I != E; ++I) {
+    BasicBlock *UnavailablePred = I->first;
+    Value *LoadPtr = I->second;
+
+    Value *NewLoad = new LoadInst(LoadPtr, LI->getName()+".pre", false,
+                                  LI->getAlignment(),
+                                  UnavailablePred->getTerminator());
+
+    // Add the newly created load.
+    ValuesPerBlock.push_back(AvailableValueInBlock::get(UnavailablePred,
+                                                        NewLoad));
+  }
 
   // Perform PHI construction.
   Value *V = ConstructSSAForLoadSet(LI, ValuesPerBlock, TD, *DT,
@@ -1862,6 +1891,10 @@ Value *GVN::lookupNumber(BasicBlock *BB, uint32_t num) {
 /// by inserting it into the appropriate sets
 bool GVN::processInstruction(Instruction *I,
                              SmallVectorImpl<Instruction*> &toErase) {
+  // Ignore dbg info intrinsics.
+  if (isa<DbgInfoIntrinsic>(I))
+    return false;
+
   if (LoadInst *LI = dyn_cast<LoadInst>(I)) {
     bool Changed = processLoad(LI, toErase);
 
@@ -2073,7 +2106,7 @@ bool GVN::performPRE(Function &F) {
       for (pred_iterator PI = pred_begin(CurrentBlock),
            PE = pred_end(CurrentBlock); PI != PE; ++PI) {
         // We're not interested in PRE where the block is its
-        // own predecessor, on in blocks with predecessors
+        // own predecessor, or in blocks with predecessors
         // that are not reachable.
         if (*PI == CurrentBlock) {
           NumWithout = 2;
@@ -2121,10 +2154,10 @@ bool GVN::performPRE(Function &F) {
         continue;
       }
 
-      // Instantiate the expression the in predecessor that lacked it.
+      // Instantiate the expression in the predecessor that lacked it.
       // Because we are going top-down through the block, all value numbers
       // will be available in the predecessor by the time we need them.  Any
-      // that weren't original present will have been instantiated earlier
+      // that weren't originally present will have been instantiated earlier
       // in this loop.
       Instruction *PREInstr = CurInst->clone();
       bool success = true;
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/IndVarSimplify.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/IndVarSimplify.cpp
index 17f7d98..5302fdc 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/IndVarSimplify.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/IndVarSimplify.cpp
@@ -364,23 +364,17 @@ bool IndVarSimplify::runOnLoop(Loop *L, LPPassManager &LPM) {
     if (ExitingBlock)
       NeedCannIV = true;
   }
-  for (unsigned i = 0, e = IU->StrideOrder.size(); i != e; ++i) {
-    const SCEV *Stride = IU->StrideOrder[i];
-    const Type *Ty = SE->getEffectiveSCEVType(Stride->getType());
+  for (IVUsers::const_iterator I = IU->begin(), E = IU->end(); I != E; ++I) {
+    const Type *Ty =
+      SE->getEffectiveSCEVType(I->getOperandValToReplace()->getType());
     if (!LargestType ||
         SE->getTypeSizeInBits(Ty) >
           SE->getTypeSizeInBits(LargestType))
       LargestType = Ty;
-
-    std::map<const SCEV *, IVUsersOfOneStride *>::iterator SI =
-      IU->IVUsesByStride.find(IU->StrideOrder[i]);
-    assert(SI != IU->IVUsesByStride.end() && "Stride doesn't exist!");
-
-    if (!SI->second->Users.empty())
-      NeedCannIV = true;
+    NeedCannIV = true;
   }
 
-  // Now that we know the largest of of the induction variable expressions
+  // Now that we know the largest of the induction variable expressions
   // in this loop, insert a canonical induction variable of the largest size.
   Value *IndVar = 0;
   if (NeedCannIV) {
@@ -455,72 +449,64 @@ void IndVarSimplify::RewriteIVExpressions(Loop *L, const Type *LargestType,
   // add the offsets to the primary induction variable and cast, avoiding
   // the need for the code evaluation methods to insert induction variables
   // of different sizes.
-  for (unsigned i = 0, e = IU->StrideOrder.size(); i != e; ++i) {
-    const SCEV *Stride = IU->StrideOrder[i];
-
-    std::map<const SCEV *, IVUsersOfOneStride *>::iterator SI =
-      IU->IVUsesByStride.find(IU->StrideOrder[i]);
-    assert(SI != IU->IVUsesByStride.end() && "Stride doesn't exist!");
-    ilist<IVStrideUse> &List = SI->second->Users;
-    for (ilist<IVStrideUse>::iterator UI = List.begin(),
-         E = List.end(); UI != E; ++UI) {
-      Value *Op = UI->getOperandValToReplace();
-      const Type *UseTy = Op->getType();
-      Instruction *User = UI->getUser();
-
-      // Compute the final addrec to expand into code.
-      const SCEV *AR = IU->getReplacementExpr(*UI);
-
-      // Evaluate the expression out of the loop, if possible.
-      if (!L->contains(UI->getUser())) {
-        const SCEV *ExitVal = SE->getSCEVAtScope(AR, L->getParentLoop());
-        if (ExitVal->isLoopInvariant(L))
-          AR = ExitVal;
-      }
+  for (IVUsers::iterator UI = IU->begin(), E = IU->end(); UI != E; ++UI) {
+    const SCEV *Stride = UI->getStride();
+    Value *Op = UI->getOperandValToReplace();
+    const Type *UseTy = Op->getType();
+    Instruction *User = UI->getUser();
+
+    // Compute the final addrec to expand into code.
+    const SCEV *AR = IU->getReplacementExpr(*UI);
+
+    // Evaluate the expression out of the loop, if possible.
+    if (!L->contains(UI->getUser())) {
+      const SCEV *ExitVal = SE->getSCEVAtScope(AR, L->getParentLoop());
+      if (ExitVal->isLoopInvariant(L))
+        AR = ExitVal;
+    }
 
-      // FIXME: It is an extremely bad idea to indvar substitute anything more
-      // complex than affine induction variables.  Doing so will put expensive
-      // polynomial evaluations inside of the loop, and the str reduction pass
-      // currently can only reduce affine polynomials.  For now just disable
-      // indvar subst on anything more complex than an affine addrec, unless
-      // it can be expanded to a trivial value.
-      if (!AR->isLoopInvariant(L) && !Stride->isLoopInvariant(L))
-        continue;
+    // FIXME: It is an extremely bad idea to indvar substitute anything more
+    // complex than affine induction variables.  Doing so will put expensive
+    // polynomial evaluations inside of the loop, and the str reduction pass
+    // currently can only reduce affine polynomials.  For now just disable
+    // indvar subst on anything more complex than an affine addrec, unless
+    // it can be expanded to a trivial value.
+    if (!AR->isLoopInvariant(L) && !Stride->isLoopInvariant(L))
+      continue;
 
-      // Determine the insertion point for this user. By default, insert
-      // immediately before the user. The SCEVExpander class will automatically
-      // hoist loop invariants out of the loop. For PHI nodes, there may be
-      // multiple uses, so compute the nearest common dominator for the
-      // incoming blocks.
-      Instruction *InsertPt = User;
-      if (PHINode *PHI = dyn_cast<PHINode>(InsertPt))
-        for (unsigned i = 0, e = PHI->getNumIncomingValues(); i != e; ++i)
-          if (PHI->getIncomingValue(i) == Op) {
-            if (InsertPt == User)
-              InsertPt = PHI->getIncomingBlock(i)->getTerminator();
-            else
-              InsertPt =
-                DT->findNearestCommonDominator(InsertPt->getParent(),
-                                               PHI->getIncomingBlock(i))
-                      ->getTerminator();
-          }
-
-      // Now expand it into actual Instructions and patch it into place.
-      Value *NewVal = Rewriter.expandCodeFor(AR, UseTy, InsertPt);
-
-      // Patch the new value into place.
-      if (Op->hasName())
-        NewVal->takeName(Op);
-      User->replaceUsesOfWith(Op, NewVal);
-      UI->setOperandValToReplace(NewVal);
-      DEBUG(dbgs() << "INDVARS: Rewrote IV '" << *AR << "' " << *Op << '\n'
-                   << "   into = " << *NewVal << "\n");
-      ++NumRemoved;
-      Changed = true;
-
-      // The old value may be dead now.
-      DeadInsts.push_back(Op);
-    }
+    // Determine the insertion point for this user. By default, insert
+    // immediately before the user. The SCEVExpander class will automatically
+    // hoist loop invariants out of the loop. For PHI nodes, there may be
+    // multiple uses, so compute the nearest common dominator for the
+    // incoming blocks.
+    Instruction *InsertPt = User;
+    if (PHINode *PHI = dyn_cast<PHINode>(InsertPt))
+      for (unsigned i = 0, e = PHI->getNumIncomingValues(); i != e; ++i)
+        if (PHI->getIncomingValue(i) == Op) {
+          if (InsertPt == User)
+            InsertPt = PHI->getIncomingBlock(i)->getTerminator();
+          else
+            InsertPt =
+              DT->findNearestCommonDominator(InsertPt->getParent(),
+                                             PHI->getIncomingBlock(i))
+                    ->getTerminator();
+        }
+
+    // Now expand it into actual Instructions and patch it into place.
+    Value *NewVal = Rewriter.expandCodeFor(AR, UseTy, InsertPt);
+
+    // Patch the new value into place.
+    if (Op->hasName())
+      NewVal->takeName(Op);
+    User->replaceUsesOfWith(Op, NewVal);
+    UI->setOperandValToReplace(NewVal);
+    DEBUG(dbgs() << "INDVARS: Rewrote IV '" << *AR << "' " << *Op << '\n'
+                 << "   into = " << *NewVal << "\n");
+    ++NumRemoved;
+    Changed = true;
+
+    // The old value may be dead now.
+    DeadInsts.push_back(Op);
   }
 
   // Clear the rewriter cache, because values that are in the rewriter's cache
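The IndVarSimplify hunks above replace a two-level walk (over strides, then over each stride's user list) with a single flat walk over all IV users, which is what makes the per-stride map and its assertions unnecessary. A toy version of that restructuring, with hypothetical names, showing that the two shapes visit the same users:

```cpp
#include <cassert>
#include <map>
#include <vector>

struct Use { int Stride; int Id; };

// New shape: one flat loop; the stride is read from each use as needed.
int countUsersFlat(const std::vector<Use> &Users) {
  int N = 0;
  for (const Use &U : Users) { (void)U.Stride; ++N; }
  return N;
}

// Old shape: outer loop over strides, inner loop over that stride's users.
int countUsersByStride(const std::map<int, std::vector<int>> &ByStride) {
  int N = 0;
  for (const auto &Pair : ByStride)
    N += static_cast<int>(Pair.second.size());
  return N;
}
```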
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/JumpThreading.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/JumpThreading.cpp
index 3eff3d8..02346a1 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/JumpThreading.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/JumpThreading.cpp
@@ -336,13 +336,18 @@ ComputeValueKnownInPredecessors(Value *V, BasicBlock *BB,PredValueInfo &Result){
       else
         InterestingVal = ConstantInt::getFalse(I->getContext());
       
-      // Scan for the sentinel.
+      // Scan for the sentinel.  If we find an undef, force it to the
+      // interesting value: x|undef -> true and x&undef -> false.
       for (unsigned i = 0, e = LHSVals.size(); i != e; ++i)
-        if (LHSVals[i].first == InterestingVal || LHSVals[i].first == 0)
+        if (LHSVals[i].first == InterestingVal || LHSVals[i].first == 0) {
           Result.push_back(LHSVals[i]);
+          Result.back().first = InterestingVal;
+        }
       for (unsigned i = 0, e = RHSVals.size(); i != e; ++i)
-        if (RHSVals[i].first == InterestingVal || RHSVals[i].first == 0)
+        if (RHSVals[i].first == InterestingVal || RHSVals[i].first == 0) {
           Result.push_back(RHSVals[i]);
+          Result.back().first = InterestingVal;
+        }
       return !Result.empty();
     }
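The JumpThreading change above is about undef: when scanning the per-predecessor known values of an OR (or AND), an undef entry is now forced to the "interesting" value, since the compiler may legally pick any value for undef (x|undef can be treated as true, x&undef as false). A toy three-valued sketch of that forcing step, with illustrative names only:

```cpp
#include <cassert>
#include <utility>
#include <vector>

enum Lattice { False, True, Undef };

// Keep only the entries that are the interesting value or undef, and force
// the undef ones to the interesting value, as the patched loops do.
std::vector<std::pair<Lattice, int>>
keepInteresting(const std::vector<std::pair<Lattice, int>> &Vals,
                Lattice Interesting) {
  std::vector<std::pair<Lattice, int>> Result;
  for (const auto &V : Vals)
    if (V.first == Interesting || V.first == Undef) {
      Result.push_back(V);
      Result.back().first = Interesting;  // x|undef -> true, x&undef -> false
    }
  return Result;
}
```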
     
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp
index fa820ed..3e03781 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopStrengthReduce.cpp
@@ -17,6 +17,40 @@
 // available on the target, and it performs a variety of other optimizations
 // related to loop induction variables.
 //
+// Terminology note: this code has a lot of handling for "post-increment" or
+// "post-inc" users. This is not talking about post-increment addressing modes;
+// it is instead talking about code like this:
+//
+//   %i = phi [ 0, %entry ], [ %i.next, %latch ]
+//   ...
+//   %i.next = add %i, 1
+//   %c = icmp eq %i.next, %n
+//
+// The SCEV for %i is {0,+,1}<%L>. The SCEV for %i.next is {1,+,1}<%L>, however
+// it's useful to think about these as the same register, with some uses using
+// the value of the register before the add and some using it after. In this
+// example, the icmp is a post-increment user, since it uses %i.next, which is
+// the value of the induction variable after the increment. The other common
+// case of post-increment users is users outside the loop.
+//
+// TODO: More sophistication in the way Formulae are generated and filtered.
+//
+// TODO: Handle multiple loops at a time.
+//
+// TODO: Should TargetLowering::AddrMode::BaseGV be changed to a ConstantExpr
+//       instead of a GlobalValue?
+//
+// TODO: When truncation is free, truncate ICmp users' operands to make it a
+//       smaller encoding (on x86 at least).
+//
+// TODO: When a negated register is used by an add (such as in a list of
+//       multiple base registers, or as the increment expression in an addrec),
+//       we may not actually need both reg and (-1 * reg) in registers; the
+//       negation can be implemented by using a sub instead of an add. The
+//       lack of support for taking this into consideration when making
+//       register pressure decisions is partly worked around by the "Special"
+//       use kind.
+//
 //===----------------------------------------------------------------------===//
 
 #define DEBUG_TYPE "loop-reduce"
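The "post-increment user" terminology from the new header comment can be seen in source-level form: `i` below plays the role of the pre-increment value (%i) and `next` the post-increment one (%i.next); the exit compare is a post-increment user because it tests `next`, not `i`. This is only an illustration of the terminology, not LSR code:

```cpp
#include <cassert>

int countIterations(int n) {  // assumes n >= 1
  int iters = 0;
  int i = 0;                  // %i = phi [ 0, %entry ], [ %i.next, %latch ]
  for (;;) {
    ++iters;                  // loop body uses the pre-increment value of i
    int next = i + 1;         // %i.next = add %i, 1
    if (next == n) break;     // %c = icmp eq %i.next, %n   (post-inc user)
    i = next;
  }
  return iters;
}
```

Thinking of %i and %i.next as one register with some users reading it before the add and some after is what lets LSR share a single induction variable between them.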
@@ -26,208 +60,401 @@
 #include "llvm/IntrinsicInst.h"
 #include "llvm/DerivedTypes.h"
 #include "llvm/Analysis/IVUsers.h"
+#include "llvm/Analysis/Dominators.h"
 #include "llvm/Analysis/LoopPass.h"
 #include "llvm/Analysis/ScalarEvolutionExpander.h"
-#include "llvm/Transforms/Utils/AddrModeMatcher.h"
 #include "llvm/Transforms/Utils/BasicBlockUtils.h"
 #include "llvm/Transforms/Utils/Local.h"
-#include "llvm/ADT/Statistic.h"
+#include "llvm/ADT/SmallBitVector.h"
+#include "llvm/ADT/SetVector.h"
+#include "llvm/ADT/DenseSet.h"
 #include "llvm/Support/Debug.h"
-#include "llvm/Support/CommandLine.h"
 #include "llvm/Support/ValueHandle.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/Target/TargetLowering.h"
 #include <algorithm>
 using namespace llvm;
 
-STATISTIC(NumReduced ,    "Number of IV uses strength reduced");
-STATISTIC(NumInserted,    "Number of PHIs inserted");
-STATISTIC(NumVariable,    "Number of PHIs with variable strides");
-STATISTIC(NumEliminated,  "Number of strides eliminated");
-STATISTIC(NumShadow,      "Number of Shadow IVs optimized");
-STATISTIC(NumImmSunk,     "Number of common expr immediates sunk into uses");
-STATISTIC(NumLoopCond,    "Number of loop terminating conds optimized");
-STATISTIC(NumCountZero,   "Number of count iv optimized to count toward zero");
+namespace {
+
+/// RegSortData - This class holds data which is used to order reuse candidates.
+class RegSortData {
+public:
+  /// UsedByIndices - This represents the set of LSRUse indices which reference
+  /// a particular register.
+  SmallBitVector UsedByIndices;
+
+  RegSortData() {}
+
+  void print(raw_ostream &OS) const;
+  void dump() const;
+};
 
-static cl::opt<bool> EnableFullLSRMode("enable-full-lsr",
-                                       cl::init(false),
-                                       cl::Hidden);
+}
+
+void RegSortData::print(raw_ostream &OS) const {
+  OS << "[NumUses=" << UsedByIndices.count() << ']';
+}
+
+void RegSortData::dump() const {
+  print(errs()); errs() << '\n';
+}
 
 namespace {
 
-  struct BasedUser;
+/// RegUseTracker - Map register candidates to information about how they are
+/// used.
+class RegUseTracker {
+  typedef DenseMap<const SCEV *, RegSortData> RegUsesTy;
 
-  /// IVInfo - This structure keeps track of one IV expression inserted during
-  /// StrengthReduceStridedIVUsers. It contains the stride, the common base, as
-  /// well as the PHI node and increment value created for rewrite.
-  struct IVExpr {
-    const SCEV *Stride;
-    const SCEV *Base;
-    PHINode    *PHI;
+  RegUsesTy RegUses;
+  SmallVector<const SCEV *, 16> RegSequence;
 
-    IVExpr(const SCEV *const stride, const SCEV *const base, PHINode *phi)
-      : Stride(stride), Base(base), PHI(phi) {}
-  };
+public:
+  void CountRegister(const SCEV *Reg, size_t LUIdx);
+
+  bool isRegUsedByUsesOtherThan(const SCEV *Reg, size_t LUIdx) const;
+
+  const SmallBitVector &getUsedByIndices(const SCEV *Reg) const;
+
+  void clear();
+
+  typedef SmallVectorImpl<const SCEV *>::iterator iterator;
+  typedef SmallVectorImpl<const SCEV *>::const_iterator const_iterator;
+  iterator begin() { return RegSequence.begin(); }
+  iterator end()   { return RegSequence.end(); }
+  const_iterator begin() const { return RegSequence.begin(); }
+  const_iterator end() const   { return RegSequence.end(); }
+};
+
+}
 
-  /// IVsOfOneStride - This structure keeps track of all IV expression inserted
-  /// during StrengthReduceStridedIVUsers for a particular stride of the IV.
-  struct IVsOfOneStride {
-    std::vector<IVExpr> IVs;
+void
+RegUseTracker::CountRegister(const SCEV *Reg, size_t LUIdx) {
+  std::pair<RegUsesTy::iterator, bool> Pair =
+    RegUses.insert(std::make_pair(Reg, RegSortData()));
+  RegSortData &RSD = Pair.first->second;
+  if (Pair.second)
+    RegSequence.push_back(Reg);
+  RSD.UsedByIndices.resize(std::max(RSD.UsedByIndices.size(), LUIdx + 1));
+  RSD.UsedByIndices.set(LUIdx);
+}
+
+bool
+RegUseTracker::isRegUsedByUsesOtherThan(const SCEV *Reg, size_t LUIdx) const {
+  if (!RegUses.count(Reg)) return false;
+  const SmallBitVector &UsedByIndices =
+    RegUses.find(Reg)->second.UsedByIndices;
+  int i = UsedByIndices.find_first();
+  if (i == -1) return false;
+  if ((size_t)i != LUIdx) return true;
+  return UsedByIndices.find_next(i) != -1;
+}
+
+const SmallBitVector &RegUseTracker::getUsedByIndices(const SCEV *Reg) const {
+  RegUsesTy::const_iterator I = RegUses.find(Reg);
+  assert(I != RegUses.end() && "Unknown register!");
+  return I->second.UsedByIndices;
+}
+
+void RegUseTracker::clear() {
+  RegUses.clear();
+  RegSequence.clear();
+}
+
+namespace {
+
+/// Formula - This class holds information that describes a formula for
+/// computing a value satisfying a use. It may include broken-out immediates
+/// and scaled registers.
+struct Formula {
+  /// AM - This is used to represent complex addressing, as well as other kinds
+  /// of interesting uses.
+  TargetLowering::AddrMode AM;
+
+  /// BaseRegs - The list of "base" registers for this use. When this is
+  /// non-empty, AM.HasBaseReg should be set to true.
+  SmallVector<const SCEV *, 2> BaseRegs;
 
-    void addIV(const SCEV *const Stride, const SCEV *const Base, PHINode *PHI) {
-      IVs.push_back(IVExpr(Stride, Base, PHI));
+  /// ScaledReg - The 'scaled' register for this use. This should be non-null
+  /// when AM.Scale is not zero.
+  const SCEV *ScaledReg;
+
+  Formula() : ScaledReg(0) {}
+
+  void InitialMatch(const SCEV *S, Loop *L,
+                    ScalarEvolution &SE, DominatorTree &DT);
+
+  unsigned getNumRegs() const;
+  const Type *getType() const;
+
+  bool referencesReg(const SCEV *S) const;
+  bool hasRegsUsedByUsesOtherThan(size_t LUIdx,
+                                  const RegUseTracker &RegUses) const;
+
+  void print(raw_ostream &OS) const;
+  void dump() const;
+};
+
+}
+
+/// DoInitialMatch - Recursion helper for InitialMatch.
+static void DoInitialMatch(const SCEV *S, Loop *L,
+                           SmallVectorImpl<const SCEV *> &Good,
+                           SmallVectorImpl<const SCEV *> &Bad,
+                           ScalarEvolution &SE, DominatorTree &DT) {
+  // Collect expressions which properly dominate the loop header.
+  if (S->properlyDominates(L->getHeader(), &DT)) {
+    Good.push_back(S);
+    return;
+  }
+
+  // Look at add operands.
+  if (const SCEVAddExpr *Add = dyn_cast<SCEVAddExpr>(S)) {
+    for (SCEVAddExpr::op_iterator I = Add->op_begin(), E = Add->op_end();
+         I != E; ++I)
+      DoInitialMatch(*I, L, Good, Bad, SE, DT);
+    return;
+  }
+
+  // Look at addrec operands.
+  if (const SCEVAddRecExpr *AR = dyn_cast<SCEVAddRecExpr>(S))
+    if (!AR->getStart()->isZero()) {
+      DoInitialMatch(AR->getStart(), L, Good, Bad, SE, DT);
+      DoInitialMatch(SE.getAddRecExpr(SE.getIntegerSCEV(0, AR->getType()),
+                                      AR->getStepRecurrence(SE),
+                                      AR->getLoop()),
+                     L, Good, Bad, SE, DT);
+      return;
     }
-  };
 
-  class LoopStrengthReduce : public LoopPass {
-    IVUsers *IU;
-    ScalarEvolution *SE;
-    bool Changed;
-
-    /// IVsByStride - Keep track of all IVs that have been inserted for a
-    /// particular stride.
-    std::map<const SCEV *, IVsOfOneStride> IVsByStride;
-
-    /// DeadInsts - Keep track of instructions we may have made dead, so that
-    /// we can remove them after we are done working.
-    SmallVector<WeakVH, 16> DeadInsts;
-
-    /// TLI - Keep a pointer of a TargetLowering to consult for determining
-    /// transformation profitability.
-    const TargetLowering *TLI;
-
-  public:
-    static char ID; // Pass ID, replacement for typeid
-    explicit LoopStrengthReduce(const TargetLowering *tli = NULL) :
-      LoopPass(&ID), TLI(tli) {}
-
-    bool runOnLoop(Loop *L, LPPassManager &LPM);
-
-    virtual void getAnalysisUsage(AnalysisUsage &AU) const {
-      // We split critical edges, so we change the CFG.  However, we do update
-      // many analyses if they are around.
-      AU.addPreservedID(LoopSimplifyID);
-      AU.addPreserved("loops");
-      AU.addPreserved("domfrontier");
-      AU.addPreserved("domtree");
-
-      AU.addRequiredID(LoopSimplifyID);
-      AU.addRequired<ScalarEvolution>();
-      AU.addPreserved<ScalarEvolution>();
-      AU.addRequired<IVUsers>();
-      AU.addPreserved<IVUsers>();
+  // Handle a multiplication by -1 (negation) if it didn't fold.
+  if (const SCEVMulExpr *Mul = dyn_cast<SCEVMulExpr>(S))
+    if (Mul->getOperand(0)->isAllOnesValue()) {
+      SmallVector<const SCEV *, 4> Ops(Mul->op_begin()+1, Mul->op_end());
+      const SCEV *NewMul = SE.getMulExpr(Ops);
+
+      SmallVector<const SCEV *, 4> MyGood;
+      SmallVector<const SCEV *, 4> MyBad;
+      DoInitialMatch(NewMul, L, MyGood, MyBad, SE, DT);
+      const SCEV *NegOne = SE.getSCEV(ConstantInt::getAllOnesValue(
+        SE.getEffectiveSCEVType(NewMul->getType())));
+      for (SmallVectorImpl<const SCEV *>::const_iterator I = MyGood.begin(),
+           E = MyGood.end(); I != E; ++I)
+        Good.push_back(SE.getMulExpr(NegOne, *I));
+      for (SmallVectorImpl<const SCEV *>::const_iterator I = MyBad.begin(),
+           E = MyBad.end(); I != E; ++I)
+        Bad.push_back(SE.getMulExpr(NegOne, *I));
+      return;
     }
 
-  private:
-    void OptimizeIndvars(Loop *L);
-
-    /// OptimizeLoopTermCond - Change loop terminating condition to use the
-    /// postinc iv when possible.
-    void OptimizeLoopTermCond(Loop *L);
-
-    /// OptimizeShadowIV - If IV is used in a int-to-float cast
-    /// inside the loop then try to eliminate the cast opeation.
-    void OptimizeShadowIV(Loop *L);
-
-    /// OptimizeMax - Rewrite the loop's terminating condition
-    /// if it uses a max computation.
-    ICmpInst *OptimizeMax(Loop *L, ICmpInst *Cond,
-                          IVStrideUse* &CondUse);
-
-    /// OptimizeLoopCountIV - If, after all sharing of IVs, the IV used for
-    /// deciding when to exit the loop is used only for that purpose, try to
-    /// rearrange things so it counts down to a test against zero.
-    bool OptimizeLoopCountIV(Loop *L);
-    bool OptimizeLoopCountIVOfStride(const SCEV* &Stride,
-                                     IVStrideUse* &CondUse, Loop *L);
-
-    /// StrengthReduceIVUsersOfStride - Strength reduce all of the users of a
-    /// single stride of IV.  All of the users may have different starting
-    /// values, and this may not be the only stride.
-    void StrengthReduceIVUsersOfStride(const SCEV *Stride,
-                                      IVUsersOfOneStride &Uses,
-                                      Loop *L);
-    void StrengthReduceIVUsers(Loop *L);
-
-    ICmpInst *ChangeCompareStride(Loop *L, ICmpInst *Cond,
-                                  IVStrideUse* &CondUse,
-                                  const SCEV* &CondStride,
-                                  bool PostPass = false);
-
-    bool FindIVUserForCond(ICmpInst *Cond, IVStrideUse *&CondUse,
-                           const SCEV* &CondStride);
-    bool RequiresTypeConversion(const Type *Ty, const Type *NewTy);
-    const SCEV *CheckForIVReuse(bool, bool, bool, const SCEV *,
-                             IVExpr&, const Type*,
-                             const std::vector<BasedUser>& UsersToProcess);
-    bool ValidScale(bool, int64_t,
-                    const std::vector<BasedUser>& UsersToProcess);
-    bool ValidOffset(bool, int64_t, int64_t,
-                     const std::vector<BasedUser>& UsersToProcess);
-    const SCEV *CollectIVUsers(const SCEV *Stride,
-                              IVUsersOfOneStride &Uses,
-                              Loop *L,
-                              bool &AllUsesAreAddresses,
-                              bool &AllUsesAreOutsideLoop,
-                              std::vector<BasedUser> &UsersToProcess);
-    bool StrideMightBeShared(const SCEV *Stride, Loop *L, bool CheckPreInc);
-    bool ShouldUseFullStrengthReductionMode(
-                                const std::vector<BasedUser> &UsersToProcess,
-                                const Loop *L,
-                                bool AllUsesAreAddresses,
-                                const SCEV *Stride);
-    void PrepareToStrengthReduceFully(
-                             std::vector<BasedUser> &UsersToProcess,
-                             const SCEV *Stride,
-                             const SCEV *CommonExprs,
-                             const Loop *L,
-                             SCEVExpander &PreheaderRewriter);
-    void PrepareToStrengthReduceFromSmallerStride(
-                                         std::vector<BasedUser> &UsersToProcess,
-                                         Value *CommonBaseV,
-                                         const IVExpr &ReuseIV,
-                                         Instruction *PreInsertPt);
-    void PrepareToStrengthReduceWithNewPhi(
-                                  std::vector<BasedUser> &UsersToProcess,
-                                  const SCEV *Stride,
-                                  const SCEV *CommonExprs,
-                                  Value *CommonBaseV,
-                                  Instruction *IVIncInsertPt,
-                                  const Loop *L,
-                                  SCEVExpander &PreheaderRewriter);
-
-    void DeleteTriviallyDeadInstructions();
-  };
+  // Ok, we can't do anything interesting. Just stuff the whole thing into a
+  // register and hope for the best.
+  Bad.push_back(S);
 }
 
-char LoopStrengthReduce::ID = 0;
-static RegisterPass<LoopStrengthReduce>
-X("loop-reduce", "Loop Strength Reduction");
+/// InitialMatch - Incorporate loop-variant parts of S into this Formula,
+/// attempting to keep all loop-invariant and loop-computable values in a
+/// single base register.
+void Formula::InitialMatch(const SCEV *S, Loop *L,
+                           ScalarEvolution &SE, DominatorTree &DT) {
+  SmallVector<const SCEV *, 4> Good;
+  SmallVector<const SCEV *, 4> Bad;
+  DoInitialMatch(S, L, Good, Bad, SE, DT);
+  if (!Good.empty()) {
+    BaseRegs.push_back(SE.getAddExpr(Good));
+    AM.HasBaseReg = true;
+  }
+  if (!Bad.empty()) {
+    BaseRegs.push_back(SE.getAddExpr(Bad));
+    AM.HasBaseReg = true;
+  }
+}
 
-Pass *llvm::createLoopStrengthReducePass(const TargetLowering *TLI) {
-  return new LoopStrengthReduce(TLI);
+/// getNumRegs - Return the total number of register operands used by this
+/// formula. This does not include register uses implied by non-constant
+/// addrec strides.
+unsigned Formula::getNumRegs() const {
+  return !!ScaledReg + BaseRegs.size();
 }
 
-/// DeleteTriviallyDeadInstructions - If any of the instructions is the
-/// specified set are trivially dead, delete them and see if this makes any of
-/// their operands subsequently dead.
-void LoopStrengthReduce::DeleteTriviallyDeadInstructions() {
-  while (!DeadInsts.empty()) {
-    Instruction *I = dyn_cast_or_null<Instruction>(DeadInsts.pop_back_val());
+/// getType - Return the type of this formula, if it has one, or null
+/// otherwise. This type is meaningless except for the bit size.
+const Type *Formula::getType() const {
+  return !BaseRegs.empty() ? BaseRegs.front()->getType() :
+         ScaledReg ? ScaledReg->getType() :
+         AM.BaseGV ? AM.BaseGV->getType() :
+         0;
+}
 
-    if (I == 0 || !isInstructionTriviallyDead(I))
-      continue;
+/// referencesReg - Test if this formula references the given register.
+bool Formula::referencesReg(const SCEV *S) const {
+  return S == ScaledReg ||
+         std::find(BaseRegs.begin(), BaseRegs.end(), S) != BaseRegs.end();
+}
 
-    for (User::op_iterator OI = I->op_begin(), E = I->op_end(); OI != E; ++OI)
-      if (Instruction *U = dyn_cast<Instruction>(*OI)) {
-        *OI = 0;
-        if (U->use_empty())
-          DeadInsts.push_back(U);
+/// hasRegsUsedByUsesOtherThan - Test whether this formula uses registers
+/// which are used by uses other than the use with the given index.
+bool Formula::hasRegsUsedByUsesOtherThan(size_t LUIdx,
+                                         const RegUseTracker &RegUses) const {
+  if (ScaledReg)
+    if (RegUses.isRegUsedByUsesOtherThan(ScaledReg, LUIdx))
+      return true;
+  for (SmallVectorImpl<const SCEV *>::const_iterator I = BaseRegs.begin(),
+       E = BaseRegs.end(); I != E; ++I)
+    if (RegUses.isRegUsedByUsesOtherThan(*I, LUIdx))
+      return true;
+  return false;
+}
+
+void Formula::print(raw_ostream &OS) const {
+  bool First = true;
+  if (AM.BaseGV) {
+    if (!First) OS << " + "; else First = false;
+    WriteAsOperand(OS, AM.BaseGV, /*PrintType=*/false);
+  }
+  if (AM.BaseOffs != 0) {
+    if (!First) OS << " + "; else First = false;
+    OS << AM.BaseOffs;
+  }
+  for (SmallVectorImpl<const SCEV *>::const_iterator I = BaseRegs.begin(),
+       E = BaseRegs.end(); I != E; ++I) {
+    if (!First) OS << " + "; else First = false;
+    OS << "reg(" << **I << ')';
+  }
+  if (AM.Scale != 0) {
+    if (!First) OS << " + "; else First = false;
+    OS << AM.Scale << "*reg(";
+    if (ScaledReg)
+      OS << *ScaledReg;
+    else
+      OS << "<unknown>";
+    OS << ')';
+  }
+}
+
+void Formula::dump() const {
+  print(errs()); errs() << '\n';
+}
+
+/// getSDiv - Return an expression for LHS /s RHS, if it can be determined,
+/// or null otherwise. If IgnoreSignificantBits is true, expressions like
+/// (X * Y) /s Y are simplified to X, ignoring that the multiplication may
+/// overflow, which is useful when the result will be used in a context where
+/// the most significant bits are ignored.
+static const SCEV *getSDiv(const SCEV *LHS, const SCEV *RHS,
+                           ScalarEvolution &SE,
+                           bool IgnoreSignificantBits = false) {
+  // Handle the trivial case, which works for any SCEV type.
+  if (LHS == RHS)
+    return SE.getIntegerSCEV(1, LHS->getType());
+
+  // Handle x /s -1 as x * -1, to give ScalarEvolution a chance to do some
+  // folding.
+  if (RHS->isAllOnesValue())
+    return SE.getMulExpr(LHS, RHS);
+
+  // Check for a division of a constant by a constant.
+  if (const SCEVConstant *C = dyn_cast<SCEVConstant>(LHS)) {
+    const SCEVConstant *RC = dyn_cast<SCEVConstant>(RHS);
+    if (!RC)
+      return 0;
+    if (C->getValue()->getValue().srem(RC->getValue()->getValue()) != 0)
+      return 0;
+    return SE.getConstant(C->getValue()->getValue()
+               .sdiv(RC->getValue()->getValue()));
+  }
+
+  // Distribute the sdiv over addrec operands.
+  if (const SCEVAddRecExpr *AR = dyn_cast<SCEVAddRecExpr>(LHS)) {
+    const SCEV *Start = getSDiv(AR->getStart(), RHS, SE,
+                                IgnoreSignificantBits);
+    if (!Start) return 0;
+    const SCEV *Step = getSDiv(AR->getStepRecurrence(SE), RHS, SE,
+                               IgnoreSignificantBits);
+    if (!Step) return 0;
+    return SE.getAddRecExpr(Start, Step, AR->getLoop());
+  }
+
+  // Distribute the sdiv over add operands.
+  if (const SCEVAddExpr *Add = dyn_cast<SCEVAddExpr>(LHS)) {
+    SmallVector<const SCEV *, 8> Ops;
+    for (SCEVAddExpr::op_iterator I = Add->op_begin(), E = Add->op_end();
+         I != E; ++I) {
+      const SCEV *Op = getSDiv(*I, RHS, SE,
+                               IgnoreSignificantBits);
+      if (!Op) return 0;
+      Ops.push_back(Op);
+    }
+    return SE.getAddExpr(Ops);
+  }
+
+  // Check for a multiply operand that we can pull RHS out of.
+  if (const SCEVMulExpr *Mul = dyn_cast<SCEVMulExpr>(LHS))
+    if (IgnoreSignificantBits || Mul->hasNoSignedWrap()) {
+      SmallVector<const SCEV *, 4> Ops;
+      bool Found = false;
+      for (SCEVMulExpr::op_iterator I = Mul->op_begin(), E = Mul->op_end();
+           I != E; ++I) {
+        if (!Found)
+          if (const SCEV *Q = getSDiv(*I, RHS, SE, IgnoreSignificantBits)) {
+            Ops.push_back(Q);
+            Found = true;
+            continue;
+          }
+        Ops.push_back(*I);
       }
+      return Found ? SE.getMulExpr(Ops) : 0;
+    }
 
-    I->eraseFromParent();
-    Changed = true;
+  // Otherwise we don't know.
+  return 0;
+}
+
+/// ExtractImmediate - If S involves the addition of a constant integer value,
+/// return that integer value, and mutate S to point to a new SCEV with that
+/// value excluded.
+static int64_t ExtractImmediate(const SCEV *&S, ScalarEvolution &SE) {
+  if (const SCEVConstant *C = dyn_cast<SCEVConstant>(S)) {
+    if (C->getValue()->getValue().getMinSignedBits() <= 64) {
+      S = SE.getIntegerSCEV(0, C->getType());
+      return C->getValue()->getSExtValue();
+    }
+  } else if (const SCEVAddExpr *Add = dyn_cast<SCEVAddExpr>(S)) {
+    SmallVector<const SCEV *, 8> NewOps(Add->op_begin(), Add->op_end());
+    int64_t Result = ExtractImmediate(NewOps.front(), SE);
+    S = SE.getAddExpr(NewOps);
+    return Result;
+  } else if (const SCEVAddRecExpr *AR = dyn_cast<SCEVAddRecExpr>(S)) {
+    SmallVector<const SCEV *, 8> NewOps(AR->op_begin(), AR->op_end());
+    int64_t Result = ExtractImmediate(NewOps.front(), SE);
+    S = SE.getAddRecExpr(NewOps, AR->getLoop());
+    return Result;
+  }
+  return 0;
+}
+
+/// ExtractSymbol - If S involves the addition of a GlobalValue address,
+/// return that symbol, and mutate S to point to a new SCEV with that
+/// value excluded.
+static GlobalValue *ExtractSymbol(const SCEV *&S, ScalarEvolution &SE) {
+  if (const SCEVUnknown *U = dyn_cast<SCEVUnknown>(S)) {
+    if (GlobalValue *GV = dyn_cast<GlobalValue>(U->getValue())) {
+      S = SE.getIntegerSCEV(0, GV->getType());
+      return GV;
+    }
+  } else if (const SCEVAddExpr *Add = dyn_cast<SCEVAddExpr>(S)) {
+    SmallVector<const SCEV *, 8> NewOps(Add->op_begin(), Add->op_end());
+    GlobalValue *Result = ExtractSymbol(NewOps.back(), SE);
+    S = SE.getAddExpr(NewOps);
+    return Result;
+  } else if (const SCEVAddRecExpr *AR = dyn_cast<SCEVAddRecExpr>(S)) {
+    SmallVector<const SCEV *, 8> NewOps(AR->op_begin(), AR->op_end());
+    GlobalValue *Result = ExtractSymbol(NewOps.front(), SE);
+    S = SE.getAddRecExpr(NewOps, AR->getLoop());
+    return Result;
   }
+  return 0;
 }
 
 /// isAddressUse - Returns true if the specified instruction is using the
@@ -276,1775 +503,832 @@ static const Type *getAccessType(const Instruction *Inst) {
       break;
     }
   }
-  return AccessTy;
-}
-
-namespace {
-  /// BasedUser - For a particular base value, keep information about how we've
-  /// partitioned the expression so far.
-  struct BasedUser {
-    /// Base - The Base value for the PHI node that needs to be inserted for
-    /// this use.  As the use is processed, information gets moved from this
-    /// field to the Imm field (below).  BasedUser values are sorted by this
-    /// field.
-    const SCEV *Base;
-
-    /// Inst - The instruction using the induction variable.
-    Instruction *Inst;
-
-    /// OperandValToReplace - The operand value of Inst to replace with the
-    /// EmittedBase.
-    Value *OperandValToReplace;
-
-    /// Imm - The immediate value that should be added to the base immediately
-    /// before Inst, because it will be folded into the imm field of the
-    /// instruction.  This is also sometimes used for loop-variant values that
-    /// must be added inside the loop.
-    const SCEV *Imm;
-
-    /// Phi - The induction variable that performs the striding that
-    /// should be used for this user.
-    PHINode *Phi;
-
-    // isUseOfPostIncrementedValue - True if this should use the
-    // post-incremented version of this IV, not the preincremented version.
-    // This can only be set in special cases, such as the terminating setcc
-    // instruction for a loop and uses outside the loop that are dominated by
-    // the loop.
-    bool isUseOfPostIncrementedValue;
-
-    BasedUser(IVStrideUse &IVSU, ScalarEvolution *se)
-      : Base(IVSU.getOffset()), Inst(IVSU.getUser()),
-        OperandValToReplace(IVSU.getOperandValToReplace()),
-        Imm(se->getIntegerSCEV(0, Base->getType())),
-        isUseOfPostIncrementedValue(IVSU.isUseOfPostIncrementedValue()) {}
-
-    // Once we rewrite the code to insert the new IVs we want, update the
-    // operands of Inst to use the new expression 'NewBase', with 'Imm' added
-    // to it.
-    void RewriteInstructionToUseNewBase(const SCEV *NewBase,
-                                        Instruction *InsertPt,
-                                       SCEVExpander &Rewriter, Loop *L, Pass *P,
-                                        SmallVectorImpl<WeakVH> &DeadInsts,
-                                        ScalarEvolution *SE);
-
-    Value *InsertCodeForBaseAtPosition(const SCEV *NewBase,
-                                       const Type *Ty,
-                                       SCEVExpander &Rewriter,
-                                       Instruction *IP,
-                                       ScalarEvolution *SE);
-    void dump() const;
-  };
-}
-
-void BasedUser::dump() const {
-  dbgs() << " Base=" << *Base;
-  dbgs() << " Imm=" << *Imm;
-  dbgs() << "   Inst: " << *Inst;
-}
-
-Value *BasedUser::InsertCodeForBaseAtPosition(const SCEV *NewBase,
-                                              const Type *Ty,
-                                              SCEVExpander &Rewriter,
-                                              Instruction *IP,
-                                              ScalarEvolution *SE) {
-  Value *Base = Rewriter.expandCodeFor(NewBase, 0, IP);
 
-  // Wrap the base in a SCEVUnknown so that ScalarEvolution doesn't try to
-  // re-analyze it.
-  const SCEV *NewValSCEV = SE->getUnknown(Base);
+  // All pointers have the same requirements, so canonicalize them to an
+  // arbitrary pointer type to minimize variation.
+  if (const PointerType *PTy = dyn_cast<PointerType>(AccessTy))
+    AccessTy = PointerType::get(IntegerType::get(PTy->getContext(), 1),
+                                PTy->getAddressSpace());
 
-  // Always emit the immediate into the same block as the user.
-  NewValSCEV = SE->getAddExpr(NewValSCEV, Imm);
-
-  return Rewriter.expandCodeFor(NewValSCEV, Ty, IP);
+  return AccessTy;
 }
 
+/// DeleteTriviallyDeadInstructions - If any of the instructions in the
+/// specified set are trivially dead, delete them and see if this makes any of
+/// their operands subsequently dead.
+static bool
+DeleteTriviallyDeadInstructions(SmallVectorImpl<WeakVH> &DeadInsts) {
+  bool Changed = false;
 
-// Once we rewrite the code to insert the new IVs we want, update the
-// operands of Inst to use the new expression 'NewBase', with 'Imm' added
-// to it. NewBasePt is the last instruction which contributes to the
-// value of NewBase in the case that it's a diffferent instruction from
-// the PHI that NewBase is computed from, or null otherwise.
-//
-void BasedUser::RewriteInstructionToUseNewBase(const SCEV *NewBase,
-                                               Instruction *NewBasePt,
-                                      SCEVExpander &Rewriter, Loop *L, Pass *P,
-                                      SmallVectorImpl<WeakVH> &DeadInsts,
-                                      ScalarEvolution *SE) {
-  if (!isa<PHINode>(Inst)) {
-    // By default, insert code at the user instruction.
-    BasicBlock::iterator InsertPt = Inst;
-
-    // However, if the Operand is itself an instruction, the (potentially
-    // complex) inserted code may be shared by many users.  Because of this, we
-    // want to emit code for the computation of the operand right before its old
-    // computation.  This is usually safe, because we obviously used to use the
-    // computation when it was computed in its current block.  However, in some
-    // cases (e.g. use of a post-incremented induction variable) the NewBase
-    // value will be pinned to live somewhere after the original computation.
-    // In this case, we have to back off.
-    //
-    // If this is a use outside the loop (which means after, since it is based
-    // on a loop indvar) we use the post-incremented value, so that we don't
-    // artificially make the preinc value live out the bottom of the loop.
-    if (!isUseOfPostIncrementedValue && L->contains(Inst)) {
-      if (NewBasePt && isa<PHINode>(OperandValToReplace)) {
-        InsertPt = NewBasePt;
-        ++InsertPt;
-      } else if (Instruction *OpInst
-                 = dyn_cast<Instruction>(OperandValToReplace)) {
-        InsertPt = OpInst;
-        while (isa<PHINode>(InsertPt)) ++InsertPt;
-      }
-    }
-    Value *NewVal = InsertCodeForBaseAtPosition(NewBase,
-                                                OperandValToReplace->getType(),
-                                                Rewriter, InsertPt, SE);
-    // Replace the use of the operand Value with the new Phi we just created.
-    Inst->replaceUsesOfWith(OperandValToReplace, NewVal);
-
-    DEBUG(dbgs() << "      Replacing with ");
-    DEBUG(WriteAsOperand(dbgs(), NewVal, /*PrintType=*/false));
-    DEBUG(dbgs() << ", which has value " << *NewBase << " plus IMM "
-                 << *Imm << "\n");
-    return;
-  }
+  while (!DeadInsts.empty()) {
+    Instruction *I = dyn_cast_or_null<Instruction>(DeadInsts.pop_back_val());
 
-  // PHI nodes are more complex.  We have to insert one copy of the NewBase+Imm
-  // expression into each operand block that uses it.  Note that PHI nodes can
-  // have multiple entries for the same predecessor.  We use a map to make sure
-  // that a PHI node only has a single Value* for each predecessor (which also
-  // prevents us from inserting duplicate code in some blocks).
-  DenseMap<BasicBlock*, Value*> InsertedCode;
-  PHINode *PN = cast<PHINode>(Inst);
-  for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i) {
-    if (PN->getIncomingValue(i) == OperandValToReplace) {
-      // If the original expression is outside the loop, put the replacement
-      // code in the same place as the original expression,
-      // which need not be an immediate predecessor of this PHI.  This way we
-      // need only one copy of it even if it is referenced multiple times in
-      // the PHI.  We don't do this when the original expression is inside the
-      // loop because multiple copies sometimes do useful sinking of code in
-      // that case(?).
-      Instruction *OldLoc = dyn_cast<Instruction>(OperandValToReplace);
-      BasicBlock *PHIPred = PN->getIncomingBlock(i);
-      if (L->contains(OldLoc)) {
-        // If this is a critical edge, split the edge so that we do not insert
-        // the code on all predecessor/successor paths.  We do this unless this
-        // is the canonical backedge for this loop, as this can make some
-        // inserted code be in an illegal position.
-        if (e != 1 && PHIPred->getTerminator()->getNumSuccessors() > 1 &&
-            !isa<IndirectBrInst>(PHIPred->getTerminator()) &&
-            (PN->getParent() != L->getHeader() || !L->contains(PHIPred))) {
-
-          // First step, split the critical edge.
-          BasicBlock *NewBB = SplitCriticalEdge(PHIPred, PN->getParent(),
-                                                P, false);
-
-          // Next step: move the basic block.  In particular, if the PHI node
-          // is outside of the loop, and PredTI is in the loop, we want to
-          // move the block to be immediately before the PHI block, not
-          // immediately after PredTI.
-          if (L->contains(PHIPred) && !L->contains(PN))
-            NewBB->moveBefore(PN->getParent());
+    if (I == 0 || !isInstructionTriviallyDead(I))
+      continue;
 
-          // Splitting the edge can reduce the number of PHI entries we have.
-          e = PN->getNumIncomingValues();
-          PHIPred = NewBB;
-          i = PN->getBasicBlockIndex(PHIPred);
-        }
-      }
-      Value *&Code = InsertedCode[PHIPred];
-      if (!Code) {
-        // Insert the code into the end of the predecessor block.
-        Instruction *InsertPt = (L->contains(OldLoc)) ?
-                                PHIPred->getTerminator() :
-                                OldLoc->getParent()->getTerminator();
-        Code = InsertCodeForBaseAtPosition(NewBase, PN->getType(),
-                                           Rewriter, InsertPt, SE);
-
-        DEBUG(dbgs() << "      Changing PHI use to ");
-        DEBUG(WriteAsOperand(dbgs(), Code, /*PrintType=*/false));
-        DEBUG(dbgs() << ", which has value " << *NewBase << " plus IMM "
-                     << *Imm << "\n");
+    for (User::op_iterator OI = I->op_begin(), E = I->op_end(); OI != E; ++OI)
+      if (Instruction *U = dyn_cast<Instruction>(*OI)) {
+        *OI = 0;
+        if (U->use_empty())
+          DeadInsts.push_back(U);
       }
 
-      // Replace the use of the operand Value with the new Phi we just created.
-      PN->setIncomingValue(i, Code);
-      Rewriter.clear();
-    }
+    I->eraseFromParent();
+    Changed = true;
   }
 
-  // PHI node might have become a constant value after SplitCriticalEdge.
-  DeadInsts.push_back(Inst);
+  return Changed;
 }
 
+namespace {
 
-/// fitsInAddressMode - Return true if V can be subsumed within an addressing
-/// mode, and does not need to be put in a register first.
-static bool fitsInAddressMode(const SCEV *V, const Type *AccessTy,
-                             const TargetLowering *TLI, bool HasBaseReg) {
-  if (const SCEVConstant *SC = dyn_cast<SCEVConstant>(V)) {
-    int64_t VC = SC->getValue()->getSExtValue();
-    if (TLI) {
-      TargetLowering::AddrMode AM;
-      AM.BaseOffs = VC;
-      AM.HasBaseReg = HasBaseReg;
-      return TLI->isLegalAddressingMode(AM, AccessTy);
-    } else {
-      // Defaults to PPC. PPC allows a sign-extended 16-bit immediate field.
-      return (VC > -(1 << 16) && VC < (1 << 16)-1);
-    }
-  }
-
-  if (const SCEVUnknown *SU = dyn_cast<SCEVUnknown>(V))
-    if (GlobalValue *GV = dyn_cast<GlobalValue>(SU->getValue())) {
-      if (TLI) {
-        TargetLowering::AddrMode AM;
-        AM.BaseGV = GV;
-        AM.HasBaseReg = HasBaseReg;
-        return TLI->isLegalAddressingMode(AM, AccessTy);
-      } else {
-        // Default: assume global addresses are not legal.
-      }
-    }
+/// Cost - This class is used to measure and compare candidate formulae.
+class Cost {
+  /// TODO: Some of these could be merged. Also, a lexical ordering
+  /// isn't always optimal.
+  unsigned NumRegs;
+  unsigned AddRecCost;
+  unsigned NumIVMuls;
+  unsigned NumBaseAdds;
+  unsigned ImmCost;
+  unsigned SetupCost;
+
+public:
+  Cost()
+    : NumRegs(0), AddRecCost(0), NumIVMuls(0), NumBaseAdds(0), ImmCost(0),
+      SetupCost(0) {}
+
+  unsigned getNumRegs() const { return NumRegs; }
+
+  bool operator<(const Cost &Other) const;
+
+  void Loose();
+
+  void RateFormula(const Formula &F,
+                   SmallPtrSet<const SCEV *, 16> &Regs,
+                   const DenseSet<const SCEV *> &VisitedRegs,
+                   const Loop *L,
+                   const SmallVectorImpl<int64_t> &Offsets,
+                   ScalarEvolution &SE, DominatorTree &DT);
+
+  void print(raw_ostream &OS) const;
+  void dump() const;
+
+private:
+  void RateRegister(const SCEV *Reg,
+                    SmallPtrSet<const SCEV *, 16> &Regs,
+                    const Loop *L,
+                    ScalarEvolution &SE, DominatorTree &DT);
+  void RatePrimaryRegister(const SCEV *Reg,
+                           SmallPtrSet<const SCEV *, 16> &Regs,
+                           const Loop *L,
+                           ScalarEvolution &SE, DominatorTree &DT);
+};
 
-  return false;
 }
 
-/// MoveLoopVariantsToImmediateField - Move any subexpressions from Val that are
-/// loop varying to the Imm operand.
-static void MoveLoopVariantsToImmediateField(const SCEV *&Val, const SCEV *&Imm,
-                                             Loop *L, ScalarEvolution *SE) {
-  if (Val->isLoopInvariant(L)) return;  // Nothing to do.
-
-  if (const SCEVAddExpr *SAE = dyn_cast<SCEVAddExpr>(Val)) {
-    SmallVector<const SCEV *, 4> NewOps;
-    NewOps.reserve(SAE->getNumOperands());
+/// RateRegister - Tally up interesting quantities from the given register.
+void Cost::RateRegister(const SCEV *Reg,
+                        SmallPtrSet<const SCEV *, 16> &Regs,
+                        const Loop *L,
+                        ScalarEvolution &SE, DominatorTree &DT) {
+  if (const SCEVAddRecExpr *AR = dyn_cast<SCEVAddRecExpr>(Reg)) {
+    if (AR->getLoop() == L)
+      AddRecCost += 1; // TODO: This should be a function of the stride.
+
+    // If this is an addrec for a loop that's already been visited by LSR,
+    // don't second-guess its addrec phi nodes. LSR isn't currently smart
+    // enough to reason about more than one loop at a time. Consider these
+    // registers free and leave them alone.
+    else if (L->contains(AR->getLoop()) ||
+             (!AR->getLoop()->contains(L) &&
+              DT.dominates(L->getHeader(), AR->getLoop()->getHeader()))) {
+      for (BasicBlock::iterator I = AR->getLoop()->getHeader()->begin();
+           PHINode *PN = dyn_cast<PHINode>(I); ++I)
+        if (SE.isSCEVable(PN->getType()) &&
+            (SE.getEffectiveSCEVType(PN->getType()) ==
+             SE.getEffectiveSCEVType(AR->getType())) &&
+            SE.getSCEV(PN) == AR)
+          return;
 
-    for (unsigned i = 0; i != SAE->getNumOperands(); ++i)
-      if (!SAE->getOperand(i)->isLoopInvariant(L)) {
-        // If this is a loop-variant expression, it must stay in the immediate
-        // field of the expression.
-        Imm = SE->getAddExpr(Imm, SAE->getOperand(i));
-      } else {
-        NewOps.push_back(SAE->getOperand(i));
-      }
+      // If this isn't one of the addrecs that the loop already has, it
+      // would require a costly new phi and add. TODO: This isn't
+      // precisely modeled right now.
+      ++NumBaseAdds;
+      if (!Regs.count(AR->getStart()))
+        RateRegister(AR->getStart(), Regs, L, SE, DT);
+    }
 
-    if (NewOps.empty())
-      Val = SE->getIntegerSCEV(0, Val->getType());
-    else
-      Val = SE->getAddExpr(NewOps);
-  } else if (const SCEVAddRecExpr *SARE = dyn_cast<SCEVAddRecExpr>(Val)) {
-    // Try to pull immediates out of the start value of nested addrec's.
-    const SCEV *Start = SARE->getStart();
-    MoveLoopVariantsToImmediateField(Start, Imm, L, SE);
-
-    SmallVector<const SCEV *, 4> Ops(SARE->op_begin(), SARE->op_end());
-    Ops[0] = Start;
-    Val = SE->getAddRecExpr(Ops, SARE->getLoop());
-  } else {
-    // Otherwise, all of Val is variant, move the whole thing over.
-    Imm = SE->getAddExpr(Imm, Val);
-    Val = SE->getIntegerSCEV(0, Val->getType());
+    // Add the step value register, if it needs one.
+    // TODO: The non-affine case isn't precisely modeled here.
+    if (!AR->isAffine() || !isa<SCEVConstant>(AR->getOperand(1)))
+      if (!Regs.count(AR->getOperand(1)))
+        RateRegister(AR->getOperand(1), Regs, L, SE, DT);
   }
+  ++NumRegs;
+
+  // Rough heuristic; favor registers which don't require extra setup
+  // instructions in the preheader.
+  if (!isa<SCEVUnknown>(Reg) &&
+      !isa<SCEVConstant>(Reg) &&
+      !(isa<SCEVAddRecExpr>(Reg) &&
+        (isa<SCEVUnknown>(cast<SCEVAddRecExpr>(Reg)->getStart()) ||
+         isa<SCEVConstant>(cast<SCEVAddRecExpr>(Reg)->getStart()))))
+    ++SetupCost;
 }
 
+/// RatePrimaryRegister - Record this register in the set. If we haven't seen it
+/// before, rate it.
+void Cost::RatePrimaryRegister(const SCEV *Reg,
+                               SmallPtrSet<const SCEV *, 16> &Regs,
+                               const Loop *L,
+                               ScalarEvolution &SE, DominatorTree &DT) {
+  if (Regs.insert(Reg))
+    RateRegister(Reg, Regs, L, SE, DT);
+}
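RatePrimaryRegister relies on the set's insert operation reporting whether the element is new, so each register is rated at most once per formula. A minimal sketch of the same visit-once pattern using std::set (note: std::set::insert returns a pair whose bool plays the role of the bool that SmallPtrSet::insert returns directly in this revision; the names below are illustrative, not LLVM API):

```cpp
#include <set>
#include <vector>

// Visit-once pattern: std::set::insert returns {iterator, bool};
// the bool is true only when the element was newly inserted, so the
// "rating" work runs at most once per distinct register.
std::vector<int> rateAll(const std::vector<int> &Regs) {
  std::set<int> Seen;
  std::vector<int> Rated;
  for (int Reg : Regs)
    if (Seen.insert(Reg).second)
      Rated.push_back(Reg); // stands in for the real RateRegister work
  return Rated;
}
```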
 
-/// MoveImmediateValues - Look at Val, and pull out any additions of constants
-/// that can fit into the immediate field of instructions in the target.
-/// Accumulate these immediate values into the Imm value.
-static void MoveImmediateValues(const TargetLowering *TLI,
-                                const Type *AccessTy,
-                                const SCEV *&Val, const SCEV *&Imm,
-                                bool isAddress, Loop *L,
-                                ScalarEvolution *SE) {
-  if (const SCEVAddExpr *SAE = dyn_cast<SCEVAddExpr>(Val)) {
-    SmallVector<const SCEV *, 4> NewOps;
-    NewOps.reserve(SAE->getNumOperands());
-
-    for (unsigned i = 0; i != SAE->getNumOperands(); ++i) {
-      const SCEV *NewOp = SAE->getOperand(i);
-      MoveImmediateValues(TLI, AccessTy, NewOp, Imm, isAddress, L, SE);
-
-      if (!NewOp->isLoopInvariant(L)) {
-        // If this is a loop-variant expression, it must stay in the immediate
-        // field of the expression.
-        Imm = SE->getAddExpr(Imm, NewOp);
-      } else {
-        NewOps.push_back(NewOp);
-      }
-    }
-
-    if (NewOps.empty())
-      Val = SE->getIntegerSCEV(0, Val->getType());
-    else
-      Val = SE->getAddExpr(NewOps);
-    return;
-  } else if (const SCEVAddRecExpr *SARE = dyn_cast<SCEVAddRecExpr>(Val)) {
-    // Try to pull immediates out of the start value of nested addrec's.
-    const SCEV *Start = SARE->getStart();
-    MoveImmediateValues(TLI, AccessTy, Start, Imm, isAddress, L, SE);
-
-    if (Start != SARE->getStart()) {
-      SmallVector<const SCEV *, 4> Ops(SARE->op_begin(), SARE->op_end());
-      Ops[0] = Start;
-      Val = SE->getAddRecExpr(Ops, SARE->getLoop());
+void Cost::RateFormula(const Formula &F,
+                       SmallPtrSet<const SCEV *, 16> &Regs,
+                       const DenseSet<const SCEV *> &VisitedRegs,
+                       const Loop *L,
+                       const SmallVectorImpl<int64_t> &Offsets,
+                       ScalarEvolution &SE, DominatorTree &DT) {
+  // Tally up the registers.
+  if (const SCEV *ScaledReg = F.ScaledReg) {
+    if (VisitedRegs.count(ScaledReg)) {
+      Loose();
+      return;
     }
-    return;
-  } else if (const SCEVMulExpr *SME = dyn_cast<SCEVMulExpr>(Val)) {
-    // Transform "8 * (4 + v)" -> "32 + 8*V" if "32" fits in the immed field.
-    if (isAddress &&
-        fitsInAddressMode(SME->getOperand(0), AccessTy, TLI, false) &&
-        SME->getNumOperands() == 2 && SME->isLoopInvariant(L)) {
-
-      const SCEV *SubImm = SE->getIntegerSCEV(0, Val->getType());
-      const SCEV *NewOp = SME->getOperand(1);
-      MoveImmediateValues(TLI, AccessTy, NewOp, SubImm, isAddress, L, SE);
-
-      // If we extracted something out of the subexpressions, see if we can
-      // simplify this!
-      if (NewOp != SME->getOperand(1)) {
-        // Scale SubImm up by "8".  If the result is a target constant, we are
-        // good.
-        SubImm = SE->getMulExpr(SubImm, SME->getOperand(0));
-        if (fitsInAddressMode(SubImm, AccessTy, TLI, false)) {
-          // Accumulate the immediate.
-          Imm = SE->getAddExpr(Imm, SubImm);
-
-          // Update what is left of 'Val'.
-          Val = SE->getMulExpr(SME->getOperand(0), NewOp);
-          return;
-        }
-      }
+    RatePrimaryRegister(ScaledReg, Regs, L, SE, DT);
+  }
+  for (SmallVectorImpl<const SCEV *>::const_iterator I = F.BaseRegs.begin(),
+       E = F.BaseRegs.end(); I != E; ++I) {
+    const SCEV *BaseReg = *I;
+    if (VisitedRegs.count(BaseReg)) {
+      Loose();
+      return;
     }
+    RatePrimaryRegister(BaseReg, Regs, L, SE, DT);
+
+    NumIVMuls += isa<SCEVMulExpr>(BaseReg) &&
+                 BaseReg->hasComputableLoopEvolution(L);
   }
 
-  // Loop-variant expressions must stay in the immediate field of the
-  // expression.
-  if ((isAddress && fitsInAddressMode(Val, AccessTy, TLI, false)) ||
-      !Val->isLoopInvariant(L)) {
-    Imm = SE->getAddExpr(Imm, Val);
-    Val = SE->getIntegerSCEV(0, Val->getType());
-    return;
+  if (F.BaseRegs.size() > 1)
+    NumBaseAdds += F.BaseRegs.size() - 1;
+
+  // Tally up the non-zero immediates.
+  for (SmallVectorImpl<int64_t>::const_iterator I = Offsets.begin(),
+       E = Offsets.end(); I != E; ++I) {
+    int64_t Offset = (uint64_t)*I + F.AM.BaseOffs;
+    if (F.AM.BaseGV)
+      ImmCost += 64; // Handle symbolic values conservatively.
+                     // TODO: This should probably be the pointer size.
+    else if (Offset != 0)
+      ImmCost += APInt(64, Offset, true).getMinSignedBits();
   }
+}
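The immediate-cost tally above charges each non-zero offset its minimal signed bit width, computed via APInt(64, Offset, true).getMinSignedBits(). A self-contained approximation of that width computation without APInt (the function name is hypothetical):

```cpp
#include <cstdint>

// minSignedBits: smallest number of bits needed to hold V in two's
// complement, mirroring APInt(64, V, true).getMinSignedBits().
unsigned minSignedBits(int64_t V) {
  // For negatives, count the significant bits of the complement;
  // e.g. -1 complements to 0 and needs only the sign bit.
  uint64_t U = (V < 0) ? ~static_cast<uint64_t>(V)
                       : static_cast<uint64_t>(V);
  unsigned Bits = 1; // the sign bit is always needed
  while (U) {
    ++Bits;
    U >>= 1;
  }
  return Bits;
}
```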
 
-  // Otherwise, no immediates to move.
+/// Loose - Set this cost to a losing value.
+void Cost::Loose() {
+  NumRegs = ~0u;
+  AddRecCost = ~0u;
+  NumIVMuls = ~0u;
+  NumBaseAdds = ~0u;
+  ImmCost = ~0u;
+  SetupCost = ~0u;
 }
 
-static void MoveImmediateValues(const TargetLowering *TLI,
-                                Instruction *User,
-                                const SCEV *&Val, const SCEV *&Imm,
-                                bool isAddress, Loop *L,
-                                ScalarEvolution *SE) {
-  const Type *AccessTy = getAccessType(User);
-  MoveImmediateValues(TLI, AccessTy, Val, Imm, isAddress, L, SE);
+/// operator< - Choose the lower cost.
+bool Cost::operator<(const Cost &Other) const {
+  if (NumRegs != Other.NumRegs)
+    return NumRegs < Other.NumRegs;
+  if (AddRecCost != Other.AddRecCost)
+    return AddRecCost < Other.AddRecCost;
+  if (NumIVMuls != Other.NumIVMuls)
+    return NumIVMuls < Other.NumIVMuls;
+  if (NumBaseAdds != Other.NumBaseAdds)
+    return NumBaseAdds < Other.NumBaseAdds;
+  if (ImmCost != Other.ImmCost)
+    return ImmCost < Other.ImmCost;
+  if (SetupCost != Other.SetupCost)
+    return SetupCost < Other.SetupCost;
+  return false;
 }
 
-/// SeparateSubExprs - Decompose Expr into all of the subexpressions that are
-/// added together.  This is used to reassociate common addition subexprs
-/// together for maximal sharing when rewriting bases.
-static void SeparateSubExprs(SmallVector<const SCEV *, 16> &SubExprs,
-                             const SCEV *Expr,
-                             ScalarEvolution *SE) {
-  if (const SCEVAddExpr *AE = dyn_cast<SCEVAddExpr>(Expr)) {
-    for (unsigned j = 0, e = AE->getNumOperands(); j != e; ++j)
-      SeparateSubExprs(SubExprs, AE->getOperand(j), SE);
-  } else if (const SCEVAddRecExpr *SARE = dyn_cast<SCEVAddRecExpr>(Expr)) {
-    const SCEV *Zero = SE->getIntegerSCEV(0, Expr->getType());
-    if (SARE->getOperand(0) == Zero) {
-      SubExprs.push_back(Expr);
-    } else {
-      // Compute the addrec with zero as its base.
-      SmallVector<const SCEV *, 4> Ops(SARE->op_begin(), SARE->op_end());
-      Ops[0] = Zero;   // Start with zero base.
-      SubExprs.push_back(SE->getAddRecExpr(Ops, SARE->getLoop()));
+void Cost::print(raw_ostream &OS) const {
+  OS << NumRegs << " reg" << (NumRegs == 1 ? "" : "s");
+  if (AddRecCost != 0)
+    OS << ", with addrec cost " << AddRecCost;
+  if (NumIVMuls != 0)
+    OS << ", plus " << NumIVMuls << " IV mul" << (NumIVMuls == 1 ? "" : "s");
+  if (NumBaseAdds != 0)
+    OS << ", plus " << NumBaseAdds << " base add"
+       << (NumBaseAdds == 1 ? "" : "s");
+  if (ImmCost != 0)
+    OS << ", plus " << ImmCost << " imm cost";
+  if (SetupCost != 0)
+    OS << ", plus " << SetupCost << " setup cost";
+}
 
+void Cost::dump() const {
+  print(errs()); errs() << '\n';
+}
 
-      SeparateSubExprs(SubExprs, SARE->getOperand(0), SE);
-    }
-  } else if (!Expr->isZero()) {
-    // Do not add zero.
-    SubExprs.push_back(Expr);
-  }
-}
-
-// This is logically local to the following function, but C++ says we have
-// to make it file scope.
-struct SubExprUseData { unsigned Count; bool notAllUsesAreFree; };
-
-/// RemoveCommonExpressionsFromUseBases - Look through all of the Bases of all
-/// the Uses, removing any common subexpressions, except that if all such
-/// subexpressions can be folded into an addressing mode for all uses inside
-/// the loop (this case is referred to as "free" in comments herein) we do
-/// not remove anything.  This looks for things like (a+b+c) and
-/// (a+c+d) and computes the common (a+c) subexpression.  The common expression
-/// is *removed* from the Bases and returned.
-static const SCEV *
-RemoveCommonExpressionsFromUseBases(std::vector<BasedUser> &Uses,
-                                    ScalarEvolution *SE, Loop *L,
-                                    const TargetLowering *TLI) {
-  unsigned NumUses = Uses.size();
-
-  // Only one use?  This is a very common case, so we handle it specially and
-  // cheaply.
-  const SCEV *Zero = SE->getIntegerSCEV(0, Uses[0].Base->getType());
-  const SCEV *Result = Zero;
-  const SCEV *FreeResult = Zero;
-  if (NumUses == 1) {
-    // If the use is inside the loop, use its base, regardless of what it is:
-    // it is clearly shared across all the IV's.  If the use is outside the loop
-    // (which means after it) we don't want to factor anything *into* the loop,
-    // so just use 0 as the base.
-    if (L->contains(Uses[0].Inst))
-      std::swap(Result, Uses[0].Base);
-    return Result;
-  }
+namespace {
 
-  // To find common subexpressions, count how many of Uses use each expression.
-  // If any subexpressions are used Uses.size() times, they are common.
-  // Also track whether all uses of each expression can be moved into an
-  // an addressing mode "for free"; such expressions are left within the loop.
-  // struct SubExprUseData { unsigned Count; bool notAllUsesAreFree; };
-  std::map<const SCEV *, SubExprUseData> SubExpressionUseData;
-
-  // UniqueSubExprs - Keep track of all of the subexpressions we see in the
-  // order we see them.
-  SmallVector<const SCEV *, 16> UniqueSubExprs;
-
-  SmallVector<const SCEV *, 16> SubExprs;
-  unsigned NumUsesInsideLoop = 0;
-  for (unsigned i = 0; i != NumUses; ++i) {
-    // If the user is outside the loop, just ignore it for base computation.
-    // Since the user is outside the loop, it must be *after* the loop (if it
-    // were before, it could not be based on the loop IV).  We don't want users
-    // after the loop to affect base computation of values *inside* the loop,
-    // because we can always add their offsets to the result IV after the loop
-    // is done, ensuring we get good code inside the loop.
-    if (!L->contains(Uses[i].Inst))
-      continue;
-    NumUsesInsideLoop++;
+/// LSRFixup - An operand value in an instruction which is to be replaced
+/// with some equivalent, possibly strength-reduced, replacement.
+struct LSRFixup {
+  /// UserInst - The instruction which will be updated.
+  Instruction *UserInst;
 
-    // If the base is zero (which is common), return zero now, there are no
-    // CSEs we can find.
-    if (Uses[i].Base == Zero) return Zero;
+  /// OperandValToReplace - The operand of the instruction which will
+  /// be replaced. The operand may be used more than once; every instance
+  /// will be replaced.
+  Value *OperandValToReplace;
 
-    // If this use is as an address we may be able to put CSEs in the addressing
-    // mode rather than hoisting them.
-    bool isAddrUse = isAddressUse(Uses[i].Inst, Uses[i].OperandValToReplace);
-    // We may need the AccessTy below, but only when isAddrUse, so compute it
-    // only in that case.
-    const Type *AccessTy = 0;
-    if (isAddrUse)
-      AccessTy = getAccessType(Uses[i].Inst);
-
-    // Split the expression into subexprs.
-    SeparateSubExprs(SubExprs, Uses[i].Base, SE);
-    // Add one to SubExpressionUseData.Count for each subexpr present, and
-    // if the subexpr is not a valid immediate within an addressing mode use,
-    // set SubExpressionUseData.notAllUsesAreFree.  We definitely want to
-    // hoist these out of the loop (if they are common to all uses).
-    for (unsigned j = 0, e = SubExprs.size(); j != e; ++j) {
-      if (++SubExpressionUseData[SubExprs[j]].Count == 1)
-        UniqueSubExprs.push_back(SubExprs[j]);
-      if (!isAddrUse || !fitsInAddressMode(SubExprs[j], AccessTy, TLI, false))
-        SubExpressionUseData[SubExprs[j]].notAllUsesAreFree = true;
-    }
-    SubExprs.clear();
-  }
-
-  // Now that we know how many times each is used, build Result.  Iterate over
-  // UniqueSubexprs so that we have a stable ordering.
-  for (unsigned i = 0, e = UniqueSubExprs.size(); i != e; ++i) {
-    std::map<const SCEV *, SubExprUseData>::iterator I =
-       SubExpressionUseData.find(UniqueSubExprs[i]);
-    assert(I != SubExpressionUseData.end() && "Entry not found?");
-    if (I->second.Count == NumUsesInsideLoop) { // Found CSE!
-      if (I->second.notAllUsesAreFree)
-        Result = SE->getAddExpr(Result, I->first);
-      else
-        FreeResult = SE->getAddExpr(FreeResult, I->first);
-    } else
-      // Remove non-cse's from SubExpressionUseData.
-      SubExpressionUseData.erase(I);
-  }
-
-  if (FreeResult != Zero) {
-    // We have some subexpressions that can be subsumed into addressing
-    // modes in every use inside the loop.  However, it's possible that
-    // there are so many of them that the combined FreeResult cannot
-    // be subsumed, or that the target cannot handle both a FreeResult
-    // and a Result in the same instruction (for example because it would
-    // require too many registers).  Check this.
-    for (unsigned i=0; i<NumUses; ++i) {
-      if (!L->contains(Uses[i].Inst))
-        continue;
-      // We know this is an addressing mode use; if there are any uses that
-      // are not, FreeResult would be Zero.
-      const Type *AccessTy = getAccessType(Uses[i].Inst);
-      if (!fitsInAddressMode(FreeResult, AccessTy, TLI, Result!=Zero)) {
-        // FIXME:  could split up FreeResult into pieces here, some hoisted
-        // and some not.  There is no obvious advantage to this.
-        Result = SE->getAddExpr(Result, FreeResult);
-        FreeResult = Zero;
-        break;
-      }
-    }
-  }
+  /// PostIncLoop - If this user is to use the post-incremented value of an
+  /// induction variable, this variable is non-null and holds the loop
+  /// associated with the induction variable.
+  const Loop *PostIncLoop;
 
-  // If we found no CSE's, return now.
-  if (Result == Zero) return Result;
+  /// LUIdx - The index of the LSRUse describing the expression which
+  /// this fixup needs, minus an offset (below).
+  size_t LUIdx;
 
-  // If we still have a FreeResult, remove its subexpressions from
-  // SubExpressionUseData.  This means they will remain in the use Bases.
-  if (FreeResult != Zero) {
-    SeparateSubExprs(SubExprs, FreeResult, SE);
-    for (unsigned j = 0, e = SubExprs.size(); j != e; ++j) {
-      std::map<const SCEV *, SubExprUseData>::iterator I =
-         SubExpressionUseData.find(SubExprs[j]);
-      SubExpressionUseData.erase(I);
-    }
-    SubExprs.clear();
-  }
+  /// Offset - A constant offset to be added to the LSRUse expression.
+  /// This allows multiple fixups to share the same LSRUse with different
+  /// offsets, for example in an unrolled loop.
+  int64_t Offset;
 
-  // Otherwise, remove all of the CSE's we found from each of the base values.
-  for (unsigned i = 0; i != NumUses; ++i) {
-    // Uses outside the loop don't necessarily include the common base, but
-    // the final IV value coming into those uses does.  Instead of trying to
-    // remove the pieces of the common base, which might not be there,
-    // subtract off the base to compensate for this.
-    if (!L->contains(Uses[i].Inst)) {
-      Uses[i].Base = SE->getMinusSCEV(Uses[i].Base, Result);
-      continue;
-    }
+  LSRFixup();
 
-    // Split the expression into subexprs.
-    SeparateSubExprs(SubExprs, Uses[i].Base, SE);
+  void print(raw_ostream &OS) const;
+  void dump() const;
+};
 
-    // Remove any common subexpressions.
-    for (unsigned j = 0, e = SubExprs.size(); j != e; ++j)
-      if (SubExpressionUseData.count(SubExprs[j])) {
-        SubExprs.erase(SubExprs.begin()+j);
-        --j; --e;
-      }
+}
 
-    // Finally, add the non-shared expressions together.
-    if (SubExprs.empty())
-      Uses[i].Base = Zero;
-    else
-      Uses[i].Base = SE->getAddExpr(SubExprs);
-    SubExprs.clear();
+LSRFixup::LSRFixup()
+  : UserInst(0), OperandValToReplace(0), PostIncLoop(0),
+    LUIdx(~size_t(0)), Offset(0) {}
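The constructor initializes LUIdx to ~size_t(0), the all-ones value, as an "unassigned" sentinel; print() below tests against the same value before emitting the index. A sketch of the pattern (the helper name is hypothetical):

```cpp
#include <cstddef>

// All-ones size_t as an "unassigned index" sentinel, as LSRFixup does
// for LUIdx: no valid index can equal ~size_t(0) in practice.
const std::size_t InvalidIdx = ~static_cast<std::size_t>(0);

// True once the fixup has been assigned to an LSRUse.
bool hasLSRUse(std::size_t LUIdx) { return LUIdx != InvalidIdx; }
```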
+
+void LSRFixup::print(raw_ostream &OS) const {
+  OS << "UserInst=";
+  // Store is common and interesting enough to be worth special-casing.
+  if (StoreInst *Store = dyn_cast<StoreInst>(UserInst)) {
+    OS << "store ";
+    WriteAsOperand(OS, Store->getOperand(0), /*PrintType=*/false);
+  } else if (UserInst->getType()->isVoidTy())
+    OS << UserInst->getOpcodeName();
+  else
+    WriteAsOperand(OS, UserInst, /*PrintType=*/false);
+
+  OS << ", OperandValToReplace=";
+  WriteAsOperand(OS, OperandValToReplace, /*PrintType=*/false);
+
+  if (PostIncLoop) {
+    OS << ", PostIncLoop=";
+    WriteAsOperand(OS, PostIncLoop->getHeader(), /*PrintType=*/false);
   }
 
-  return Result;
-}
+  if (LUIdx != ~size_t(0))
+    OS << ", LUIdx=" << LUIdx;
 
-/// ValidScale - Check whether the given Scale is valid for all loads and
-/// stores in UsersToProcess.
-///
-bool LoopStrengthReduce::ValidScale(bool HasBaseReg, int64_t Scale,
-                               const std::vector<BasedUser>& UsersToProcess) {
-  if (!TLI)
-    return true;
+  if (Offset != 0)
+    OS << ", Offset=" << Offset;
+}
 
-  for (unsigned i = 0, e = UsersToProcess.size(); i!=e; ++i) {
-    // If this is a load or other access, pass the type of the access in.
-    const Type *AccessTy =
-        Type::getVoidTy(UsersToProcess[i].Inst->getContext());
-    if (isAddressUse(UsersToProcess[i].Inst,
-                     UsersToProcess[i].OperandValToReplace))
-      AccessTy = getAccessType(UsersToProcess[i].Inst);
-    else if (isa<PHINode>(UsersToProcess[i].Inst))
-      continue;
+void LSRFixup::dump() const {
+  print(errs()); errs() << '\n';
+}
 
-    TargetLowering::AddrMode AM;
-    if (const SCEVConstant *SC = dyn_cast<SCEVConstant>(UsersToProcess[i].Imm))
-      AM.BaseOffs = SC->getValue()->getSExtValue();
-    AM.HasBaseReg = HasBaseReg || !UsersToProcess[i].Base->isZero();
-    AM.Scale = Scale;
+namespace {
 
-    // If load[imm+r*scale] is illegal, bail out.
-    if (!TLI->isLegalAddressingMode(AM, AccessTy))
-      return false;
+/// UniquifierDenseMapInfo - A DenseMapInfo implementation for keying
+/// DenseMaps and DenseSets with sorted SmallVectors of const SCEV*.
+struct UniquifierDenseMapInfo {
+  static SmallVector<const SCEV *, 2> getEmptyKey() {
+    SmallVector<const SCEV *, 2> V;
+    V.push_back(reinterpret_cast<const SCEV *>(-1));
+    return V;
   }
-  return true;
-}
-
-/// ValidOffset - Check whether the given Offset is valid for all loads and
-/// stores in UsersToProcess.
-///
-bool LoopStrengthReduce::ValidOffset(bool HasBaseReg,
-                               int64_t Offset,
-                               int64_t Scale,
-                               const std::vector<BasedUser>& UsersToProcess) {
-  if (!TLI)
-    return true;
 
-  for (unsigned i=0, e = UsersToProcess.size(); i!=e; ++i) {
-    // If this is a load or other access, pass the type of the access in.
-    const Type *AccessTy =
-        Type::getVoidTy(UsersToProcess[i].Inst->getContext());
-    if (isAddressUse(UsersToProcess[i].Inst,
-                     UsersToProcess[i].OperandValToReplace))
-      AccessTy = getAccessType(UsersToProcess[i].Inst);
-    else if (isa<PHINode>(UsersToProcess[i].Inst))
-      continue;
+  static SmallVector<const SCEV *, 2> getTombstoneKey() {
+    SmallVector<const SCEV *, 2> V;
+    V.push_back(reinterpret_cast<const SCEV *>(-2));
+    return V;
+  }
 
-    TargetLowering::AddrMode AM;
-    if (const SCEVConstant *SC = dyn_cast<SCEVConstant>(UsersToProcess[i].Imm))
-      AM.BaseOffs = SC->getValue()->getSExtValue();
-    AM.BaseOffs = (uint64_t)AM.BaseOffs + (uint64_t)Offset;
-    AM.HasBaseReg = HasBaseReg || !UsersToProcess[i].Base->isZero();
-    AM.Scale = Scale;
+  static unsigned getHashValue(const SmallVector<const SCEV *, 2> &V) {
+    unsigned Result = 0;
+    for (SmallVectorImpl<const SCEV *>::const_iterator I = V.begin(),
+         E = V.end(); I != E; ++I)
+      Result ^= DenseMapInfo<const SCEV *>::getHashValue(*I);
+    return Result;
+  }
 
-    // If load[imm+r*scale] is illegal, bail out.
-    if (!TLI->isLegalAddressingMode(AM, AccessTy))
-      return false;
+  static bool isEqual(const SmallVector<const SCEV *, 2> &LHS,
+                      const SmallVector<const SCEV *, 2> &RHS) {
+    return LHS == RHS;
   }
-  return true;
-}
+};
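getHashValue above combines the per-element hashes with XOR, which is order-insensitive: any permutation of the same elements hashes identically. isEqual, however, compares the vectors element by element, which is why the keyed SmallVectors must be kept in a canonical (sorted) order. A minimal sketch with std::vector and std::hash (names are illustrative):

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// xorHash: combine element hashes with XOR, like the patch's
// getHashValue. Permutations of the same pointers hash identically,
// but vector equality is positional, so keys must be canonically
// ordered for hash and equality to agree.
std::size_t xorHash(const std::vector<const void *> &V) {
  std::size_t Result = 0;
  for (const void *P : V)
    Result ^= std::hash<const void *>()(P);
  return Result;
}
```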
+
+/// LSRUse - This class holds the state that LSR keeps for each use in
+/// IVUsers, as well as uses invented by LSR itself. It includes information
+/// about what kinds of things can be folded into the user, information about
+/// the user itself, and information about how the use may be satisfied.
+/// TODO: Represent multiple users of the same expression in common?
+class LSRUse {
+  DenseSet<SmallVector<const SCEV *, 2>, UniquifierDenseMapInfo> Uniquifier;
+
+public:
+  /// KindType - An enum for a kind of use, indicating what types of
+  /// scaled and immediate operands it might support.
+  enum KindType {
+    Basic,   ///< A normal use, with no folding.
+    Special, ///< A special case of basic, allowing -1 scales.
+    Address, ///< An address use; folding according to TargetLowering.
+    ICmpZero ///< An equality icmp with both operands folded into one.
+    // TODO: Add a generic icmp too?
+  };
 
-/// RequiresTypeConversion - Returns true if converting Ty1 to Ty2 is not
-/// a nop.
-bool LoopStrengthReduce::RequiresTypeConversion(const Type *Ty1,
-                                                const Type *Ty2) {
-  if (Ty1 == Ty2)
-    return false;
-  Ty1 = SE->getEffectiveSCEVType(Ty1);
-  Ty2 = SE->getEffectiveSCEVType(Ty2);
-  if (Ty1 == Ty2)
-    return false;
-  if (Ty1->canLosslesslyBitCastTo(Ty2))
-    return false;
-  if (TLI && TLI->isTruncateFree(Ty1, Ty2))
-    return false;
-  return true;
-}
+  KindType Kind;
+  const Type *AccessTy;
 
-/// CheckForIVReuse - Returns the multiple if the stride is the multiple
-/// of a previous stride and it is a legal value for the target addressing
-/// mode scale component and optional base reg. This allows the users of
-/// this stride to be rewritten as prev iv * factor. It returns 0 if no
-/// reuse is possible.  Factors can be negative on same targets, e.g. ARM.
-///
-/// If all uses are outside the loop, we don't require that all multiplies
-/// be folded into the addressing mode, nor even that the factor be constant;
-/// a multiply (executed once) outside the loop is better than another IV
-/// within.  Well, usually.
-const SCEV *LoopStrengthReduce::CheckForIVReuse(bool HasBaseReg,
-                                bool AllUsesAreAddresses,
-                                bool AllUsesAreOutsideLoop,
-                                const SCEV *Stride,
-                                IVExpr &IV, const Type *Ty,
-                                const std::vector<BasedUser>& UsersToProcess) {
-  if (const SCEVConstant *SC = dyn_cast<SCEVConstant>(Stride)) {
-    int64_t SInt = SC->getValue()->getSExtValue();
-    for (unsigned NewStride = 0, e = IU->StrideOrder.size();
-         NewStride != e; ++NewStride) {
-      std::map<const SCEV *, IVsOfOneStride>::iterator SI =
-                IVsByStride.find(IU->StrideOrder[NewStride]);
-      if (SI == IVsByStride.end() || !isa<SCEVConstant>(SI->first))
-        continue;
-      // The other stride has no uses, don't reuse it.
-      std::map<const SCEV *, IVUsersOfOneStride *>::iterator UI =
-        IU->IVUsesByStride.find(IU->StrideOrder[NewStride]);
-      if (UI->second->Users.empty())
-        continue;
-      int64_t SSInt = cast<SCEVConstant>(SI->first)->getValue()->getSExtValue();
-      if (SI->first != Stride &&
-          (unsigned(abs64(SInt)) < SSInt || (SInt % SSInt) != 0))
-        continue;
-      int64_t Scale = SInt / SSInt;
-      // Check that this stride is valid for all the types used for loads and
-      // stores; if it can be used for some and not others, we might as well use
-      // the original stride everywhere, since we have to create the IV for it
-      // anyway. If the scale is 1, then we don't need to worry about folding
-      // multiplications.
-      if (Scale == 1 ||
-          (AllUsesAreAddresses &&
-           ValidScale(HasBaseReg, Scale, UsersToProcess))) {
-        // Prefer to reuse an IV with a base of zero.
-        for (std::vector<IVExpr>::iterator II = SI->second.IVs.begin(),
-               IE = SI->second.IVs.end(); II != IE; ++II)
-          // Only reuse previous IV if it would not require a type conversion
-          // and if the base difference can be folded.
-          if (II->Base->isZero() &&
-              !RequiresTypeConversion(II->Base->getType(), Ty)) {
-            IV = *II;
-            return SE->getIntegerSCEV(Scale, Stride->getType());
-          }
-        // Otherwise, settle for an IV with a foldable base.
-        if (AllUsesAreAddresses)
-          for (std::vector<IVExpr>::iterator II = SI->second.IVs.begin(),
-                 IE = SI->second.IVs.end(); II != IE; ++II)
-            // Only reuse previous IV if it would not require a type conversion
-            // and if the base difference can be folded.
-            if (SE->getEffectiveSCEVType(II->Base->getType()) ==
-                SE->getEffectiveSCEVType(Ty) &&
-                isa<SCEVConstant>(II->Base)) {
-              int64_t Base =
-                cast<SCEVConstant>(II->Base)->getValue()->getSExtValue();
-              if (Base > INT32_MIN && Base <= INT32_MAX &&
-                  ValidOffset(HasBaseReg, -Base * Scale,
-                              Scale, UsersToProcess)) {
-                IV = *II;
-                return SE->getIntegerSCEV(Scale, Stride->getType());
-              }
-            }
-      }
-    }
-  } else if (AllUsesAreOutsideLoop) {
-    // Accept nonconstant strides here; it is really really right to substitute
-    // an existing IV if we can.
-    for (unsigned NewStride = 0, e = IU->StrideOrder.size();
-         NewStride != e; ++NewStride) {
-      std::map<const SCEV *, IVsOfOneStride>::iterator SI =
-                IVsByStride.find(IU->StrideOrder[NewStride]);
-      if (SI == IVsByStride.end() || !isa<SCEVConstant>(SI->first))
-        continue;
-      int64_t SSInt = cast<SCEVConstant>(SI->first)->getValue()->getSExtValue();
-      if (SI->first != Stride && SSInt != 1)
-        continue;
-      for (std::vector<IVExpr>::iterator II = SI->second.IVs.begin(),
-             IE = SI->second.IVs.end(); II != IE; ++II)
-        // Accept nonzero base here.
-        // Only reuse previous IV if it would not require a type conversion.
-        if (!RequiresTypeConversion(II->Base->getType(), Ty)) {
-          IV = *II;
-          return Stride;
-        }
-    }
-    // Special case, old IV is -1*x and this one is x.  Can treat this one as
-    // -1*old.
-    for (unsigned NewStride = 0, e = IU->StrideOrder.size();
-         NewStride != e; ++NewStride) {
-      std::map<const SCEV *, IVsOfOneStride>::iterator SI =
-                IVsByStride.find(IU->StrideOrder[NewStride]);
-      if (SI == IVsByStride.end())
-        continue;
-      if (const SCEVMulExpr *ME = dyn_cast<SCEVMulExpr>(SI->first))
-        if (const SCEVConstant *SC = dyn_cast<SCEVConstant>(ME->getOperand(0)))
-          if (Stride == ME->getOperand(1) &&
-              SC->getValue()->getSExtValue() == -1LL)
-            for (std::vector<IVExpr>::iterator II = SI->second.IVs.begin(),
-                   IE = SI->second.IVs.end(); II != IE; ++II)
-              // Accept nonzero base here.
-              // Only reuse previous IV if it would not require type conversion.
-              if (!RequiresTypeConversion(II->Base->getType(), Ty)) {
-                IV = *II;
-                return SE->getIntegerSCEV(-1LL, Stride->getType());
-              }
-    }
-  }
-  return SE->getIntegerSCEV(0, Stride->getType());
-}
-
-/// PartitionByIsUseOfPostIncrementedValue - Simple boolean predicate that
-/// returns true if Val's isUseOfPostIncrementedValue is true.
-static bool PartitionByIsUseOfPostIncrementedValue(const BasedUser &Val) {
-  return Val.isUseOfPostIncrementedValue;
-}
-
-/// isNonConstantNegative - Return true if the specified scev is negated, but
-/// not a constant.
-static bool isNonConstantNegative(const SCEV *Expr) {
-  const SCEVMulExpr *Mul = dyn_cast<SCEVMulExpr>(Expr);
-  if (!Mul) return false;
-
-  // If there is a constant factor, it will be first.
-  const SCEVConstant *SC = dyn_cast<SCEVConstant>(Mul->getOperand(0));
-  if (!SC) return false;
-
-  // Return true if the value is negative; this matches things like (-42 * V).
-  return SC->getValue()->getValue().isNegative();
-}
-
-/// CollectIVUsers - Transform our list of users and offsets to a bit more
-/// complex table. In this new vector, each 'BasedUser' contains 'Base', the
-/// base of the strided accesses, as well as the old information from Uses. We
-/// progressively move information from the Base field to the Imm field, until
-/// we eventually have the full access expression to rewrite the use.
-const SCEV *LoopStrengthReduce::CollectIVUsers(const SCEV *Stride,
-                                              IVUsersOfOneStride &Uses,
-                                              Loop *L,
-                                              bool &AllUsesAreAddresses,
-                                              bool &AllUsesAreOutsideLoop,
-                                       std::vector<BasedUser> &UsersToProcess) {
-  // FIXME: Generalize to non-affine IV's.
-  if (!Stride->isLoopInvariant(L))
-    return SE->getIntegerSCEV(0, Stride->getType());
-
-  UsersToProcess.reserve(Uses.Users.size());
-  for (ilist<IVStrideUse>::iterator I = Uses.Users.begin(),
-       E = Uses.Users.end(); I != E; ++I) {
-    UsersToProcess.push_back(BasedUser(*I, SE));
-
-    // Move any loop variant operands from the offset field to the immediate
-    // field of the use, so that we don't try to use something before it is
-    // computed.
-    MoveLoopVariantsToImmediateField(UsersToProcess.back().Base,
-                                     UsersToProcess.back().Imm, L, SE);
-    assert(UsersToProcess.back().Base->isLoopInvariant(L) &&
-           "Base value is not loop invariant!");
-  }
-
-  // We now have a whole bunch of uses of like-strided induction variables, but
-  // they might all have different bases.  We want to emit one PHI node for this
-  // stride which we fold as many common expressions (between the IVs) into as
-  // possible.  Start by identifying the common expressions in the base values
-  // for the strides (e.g. if we have "A+C+B" and "A+B+D" as our bases, find
-  // "A+B"), emit it to the preheader, then remove the expression from the
-  // UsersToProcess base values.
-  const SCEV *CommonExprs =
-    RemoveCommonExpressionsFromUseBases(UsersToProcess, SE, L, TLI);
-
-  // Next, figure out what we can represent in the immediate fields of
-  // instructions.  If we can represent anything there, move it to the imm
-  // fields of the BasedUsers.  We do this so that it increases the commonality
-  // of the remaining uses.
-  unsigned NumPHI = 0;
-  bool HasAddress = false;
-  for (unsigned i = 0, e = UsersToProcess.size(); i != e; ++i) {
-    // If the user is not in the current loop, this means it is using the exit
-    // value of the IV.  Do not put anything in the base, make sure it's all in
-    // the immediate field to allow as much factoring as possible.
-    if (!L->contains(UsersToProcess[i].Inst)) {
-      UsersToProcess[i].Imm = SE->getAddExpr(UsersToProcess[i].Imm,
-                                             UsersToProcess[i].Base);
-      UsersToProcess[i].Base =
-        SE->getIntegerSCEV(0, UsersToProcess[i].Base->getType());
-    } else {
-      // Not all uses are outside the loop.
-      AllUsesAreOutsideLoop = false;
-
-      // Addressing modes can be folded into loads and stores.  Be careful that
-      // the store is through the expression, not of the expression though.
-      bool isPHI = false;
-      bool isAddress = isAddressUse(UsersToProcess[i].Inst,
-                                    UsersToProcess[i].OperandValToReplace);
-      if (isa<PHINode>(UsersToProcess[i].Inst)) {
-        isPHI = true;
-        ++NumPHI;
-      }
+  SmallVector<int64_t, 8> Offsets;
+  int64_t MinOffset;
+  int64_t MaxOffset;
 
-      if (isAddress)
-        HasAddress = true;
+  /// AllFixupsOutsideLoop - This records whether all of the fixups using this
+  /// LSRUse are outside of the loop, in which case some special-case heuristics
+  /// may be used.
+  bool AllFixupsOutsideLoop;
 
-      // If this use isn't an address, then not all uses are addresses.
-      if (!isAddress && !isPHI)
-        AllUsesAreAddresses = false;
+  /// Formulae - A list of ways to build a value that can satisfy this user.
+  /// After the list is populated, one of these is selected heuristically and
+  /// used to formulate a replacement for OperandValToReplace in UserInst.
+  SmallVector<Formula, 12> Formulae;
 
-      MoveImmediateValues(TLI, UsersToProcess[i].Inst, UsersToProcess[i].Base,
-                          UsersToProcess[i].Imm, isAddress, L, SE);
-    }
-  }
+  /// Regs - The set of register candidates used by all formulae in this LSRUse.
+  SmallPtrSet<const SCEV *, 4> Regs;
 
-  // If one of the uses is a PHI node and all other uses are addresses, still
-  // allow iv reuse. Essentially we are trading one constant multiplication
-  // for one fewer iv.
-  if (NumPHI > 1)
-    AllUsesAreAddresses = false;
+  LSRUse(KindType K, const Type *T) : Kind(K), AccessTy(T),
+                                      MinOffset(INT64_MAX),
+                                      MaxOffset(INT64_MIN),
+                                      AllFixupsOutsideLoop(true) {}
 
-  // There are no in-loop address uses.
-  if (AllUsesAreAddresses && (!HasAddress && !AllUsesAreOutsideLoop))
-    AllUsesAreAddresses = false;
+  bool InsertFormula(size_t LUIdx, const Formula &F);
 
-  return CommonExprs;
-}
+  void check() const;
 
-/// ShouldUseFullStrengthReductionMode - Test whether full strength-reduction
-/// is valid and profitable for the given set of users of a stride. In
-/// full strength-reduction mode, all addresses at the current stride are
-/// strength-reduced all the way down to pointer arithmetic.
-///
-bool LoopStrengthReduce::ShouldUseFullStrengthReductionMode(
-                                   const std::vector<BasedUser> &UsersToProcess,
-                                   const Loop *L,
-                                   bool AllUsesAreAddresses,
-                                   const SCEV *Stride) {
-  if (!EnableFullLSRMode)
-    return false;
+  void print(raw_ostream &OS) const;
+  void dump() const;
+};
 
-  // The heuristics below aim to avoid increasing register pressure, but
-  // fully strength-reducing all the addresses increases the number of
-  // add instructions, so don't do this when optimizing for size.
-  // TODO: If the loop is large, the savings due to simpler addresses
-  // may outweigh the costs of the extra increment instructions.
-  if (L->getHeader()->getParent()->hasFnAttr(Attribute::OptimizeForSize))
-    return false;
+/// InsertFormula - If the given formula has not yet been inserted, add it to
+/// the list, and return true. Return false otherwise.
+bool LSRUse::InsertFormula(size_t LUIdx, const Formula &F) {
+  SmallVector<const SCEV *, 2> Key = F.BaseRegs;
+  if (F.ScaledReg) Key.push_back(F.ScaledReg);
+  // Unstable sort by host order ok, because this is only used for uniquifying.
+  std::sort(Key.begin(), Key.end());
 
-  // TODO: For now, don't do full strength reduction if there could
-  // potentially be greater-stride multiples of the current stride
-  // which could reuse the current stride IV.
-  if (IU->StrideOrder.back() != Stride)
+  if (!Uniquifier.insert(Key).second)
     return false;
 
-  // Iterate through the uses to find conditions that automatically rule out
-  // full-lsr mode.
-  for (unsigned i = 0, e = UsersToProcess.size(); i != e; ) {
-    const SCEV *Base = UsersToProcess[i].Base;
-    const SCEV *Imm = UsersToProcess[i].Imm;
-    // If any users have a loop-variant component, they can't be fully
-    // strength-reduced.
-    if (Imm && !Imm->isLoopInvariant(L))
-      return false;
-    // If there are two users with the same base and the difference between
-    // the two Imm values can't be folded into the address, full
-    // strength reduction would increase register pressure.
-    do {
-      const SCEV *CurImm = UsersToProcess[i].Imm;
-      if ((CurImm || Imm) && CurImm != Imm) {
-        if (!CurImm) CurImm = SE->getIntegerSCEV(0, Stride->getType());
-        if (!Imm)       Imm = SE->getIntegerSCEV(0, Stride->getType());
-        const Instruction *Inst = UsersToProcess[i].Inst;
-        const Type *AccessTy = getAccessType(Inst);
-        const SCEV *Diff = SE->getMinusSCEV(UsersToProcess[i].Imm, Imm);
-        if (!Diff->isZero() &&
-            (!AllUsesAreAddresses ||
-             !fitsInAddressMode(Diff, AccessTy, TLI, /*HasBaseReg=*/true)))
-          return false;
-      }
-    } while (++i != e && Base == UsersToProcess[i].Base);
-  }
+  // Using a register to hold the value of 0 is not profitable.
+  assert((!F.ScaledReg || !F.ScaledReg->isZero()) &&
+         "Zero allocated in a scaled register!");
+#ifndef NDEBUG
+  for (SmallVectorImpl<const SCEV *>::const_iterator I =
+       F.BaseRegs.begin(), E = F.BaseRegs.end(); I != E; ++I)
+    assert(!(*I)->isZero() && "Zero allocated in a base register!");
+#endif
 
-  // If there's exactly one user in this stride, fully strength-reducing it
-  // won't increase register pressure. If it's starting from a non-zero base,
-  // it'll be simpler this way.
-  if (UsersToProcess.size() == 1 && !UsersToProcess[0].Base->isZero())
-    return true;
+  // Add the formula to the list.
+  Formulae.push_back(F);
 
-  // Otherwise, if there are any users in this stride that don't require
-  // a register for their base, full strength-reduction will increase
-  // register pressure.
-  for (unsigned i = 0, e = UsersToProcess.size(); i != e; ++i)
-    if (UsersToProcess[i].Base->isZero())
-      return false;
+  // Record registers now being used by this use.
+  if (F.ScaledReg) Regs.insert(F.ScaledReg);
+  Regs.insert(F.BaseRegs.begin(), F.BaseRegs.end());
 
-  // Otherwise, go for it.
   return true;
 }
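The uniquifier in LSRUse::InsertFormula above canonicalizes the register list by sorting it before using it as a set key, so two formulae that name the same registers in a different order compare as duplicates. A minimal standalone sketch of that idea (plain ints stand in for the SCEV pointers; the names here are hypothetical, not LLVM's):

```cpp
#include <algorithm>
#include <set>
#include <vector>

// A "formula" is reduced to the multiset of registers it uses.
// Sorting the key first makes the membership test order-insensitive,
// mirroring the Uniquifier logic in LSRUse::InsertFormula.
static std::set<std::vector<int>> Seen;

bool insertFormula(std::vector<int> Key) {
  std::sort(Key.begin(), Key.end());   // canonical order
  return Seen.insert(Key).second;      // true only for a new formula
}
```

With this, insertFormula({3, 1}) succeeds, while a later insertFormula({1, 3}) is rejected as a duplicate of the same register set.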
 
-/// InsertAffinePhi - Create and insert a PHI node for an induction variable
-/// with the specified start and step values in the specified loop.
-///
-/// If NegateStride is true, the stride should be negated by using a
-/// subtract instead of an add.
-///
-/// Return the created phi node.
-///
-static PHINode *InsertAffinePhi(const SCEV *Start, const SCEV *Step,
-                                Instruction *IVIncInsertPt,
-                                const Loop *L,
-                                SCEVExpander &Rewriter) {
-  assert(Start->isLoopInvariant(L) && "New PHI start is not loop invariant!");
-  assert(Step->isLoopInvariant(L) && "New PHI stride is not loop invariant!");
-
-  BasicBlock *Header = L->getHeader();
-  BasicBlock *Preheader = L->getLoopPreheader();
-  BasicBlock *LatchBlock = L->getLoopLatch();
-  const Type *Ty = Start->getType();
-  Ty = Rewriter.SE.getEffectiveSCEVType(Ty);
-
-  PHINode *PN = PHINode::Create(Ty, "lsr.iv", Header->begin());
-  PN->addIncoming(Rewriter.expandCodeFor(Start, Ty, Preheader->getTerminator()),
-                  Preheader);
-
-  // If the stride is negative, insert a sub instead of an add for the
-  // increment.
-  bool isNegative = isNonConstantNegative(Step);
-  const SCEV *IncAmount = Step;
-  if (isNegative)
-    IncAmount = Rewriter.SE.getNegativeSCEV(Step);
-
-  // Insert an add instruction right before the terminator corresponding
-  // to the back-edge or just before the only use. The location is determined
-  // by the caller and passed in as IVIncInsertPt.
-  Value *StepV = Rewriter.expandCodeFor(IncAmount, Ty,
-                                        Preheader->getTerminator());
-  Instruction *IncV;
-  if (isNegative) {
-    IncV = BinaryOperator::CreateSub(PN, StepV, "lsr.iv.next",
-                                     IVIncInsertPt);
-  } else {
-    IncV = BinaryOperator::CreateAdd(PN, StepV, "lsr.iv.next",
-                                     IVIncInsertPt);
-  }
-  if (!isa<ConstantInt>(StepV)) ++NumVariable;
-
-  PN->addIncoming(IncV, LatchBlock);
-
-  ++NumInserted;
-  return PN;
-}
-
-static void SortUsersToProcess(std::vector<BasedUser> &UsersToProcess) {
-  // We want to emit code for users inside the loop first.  To do this, we
-  // rearrange BasedUser so that the entries at the end have
-  // isUseOfPostIncrementedValue = false, because we pop off the end of the
-  // vector (so we handle them first).
-  std::partition(UsersToProcess.begin(), UsersToProcess.end(),
-                 PartitionByIsUseOfPostIncrementedValue);
-
-  // Sort this by base, so that things with the same base are handled
-  // together.  By partitioning first and stable-sorting later, we are
-  // guaranteed that within each base we will pop off users from within the
-  // loop before users outside of the loop with a particular base.
-  //
-  // We would like to use stable_sort here, but we can't.  The problem is that
-  // const SCEV *'s don't have a deterministic ordering w.r.t to each other, so
-  // we don't have anything to do a '<' comparison on.  Because we think the
-  // number of uses is small, do a horrible bubble sort which just relies on
-  // ==.
-  for (unsigned i = 0, e = UsersToProcess.size(); i != e; ++i) {
-    // Get a base value.
-    const SCEV *Base = UsersToProcess[i].Base;
-
-    // Compact everything with this base to be consecutive with this one.
-    for (unsigned j = i+1; j != e; ++j) {
-      if (UsersToProcess[j].Base == Base) {
-        std::swap(UsersToProcess[i+1], UsersToProcess[j]);
-        ++i;
-      }
-    }
+void LSRUse::print(raw_ostream &OS) const {
+  OS << "LSR Use: Kind=";
+  switch (Kind) {
+  case Basic:    OS << "Basic"; break;
+  case Special:  OS << "Special"; break;
+  case ICmpZero: OS << "ICmpZero"; break;
+  case Address:
+    OS << "Address of ";
+    if (isa<PointerType>(AccessTy))
+      OS << "pointer"; // the full pointer type could be really verbose
+    else
+      OS << *AccessTy;
   }
-}
 
-/// PrepareToStrengthReduceFully - Prepare to fully strength-reduce
-/// UsersToProcess, meaning lowering addresses all the way down to direct
-/// pointer arithmetic.
-///
-void
-LoopStrengthReduce::PrepareToStrengthReduceFully(
-                                        std::vector<BasedUser> &UsersToProcess,
-                                        const SCEV *Stride,
-                                        const SCEV *CommonExprs,
-                                        const Loop *L,
-                                        SCEVExpander &PreheaderRewriter) {
-  DEBUG(dbgs() << "  Fully reducing all users\n");
-
-  // Rewrite the UsersToProcess records, creating a separate PHI for each
-  // unique Base value.
-  Instruction *IVIncInsertPt = L->getLoopLatch()->getTerminator();
-  for (unsigned i = 0, e = UsersToProcess.size(); i != e; ) {
-    // TODO: The uses are grouped by base, but not sorted. We arbitrarily
-    // pick the first Imm value here to start with, and adjust it for the
-    // other uses.
-    const SCEV *Imm = UsersToProcess[i].Imm;
-    const SCEV *Base = UsersToProcess[i].Base;
-    const SCEV *Start = SE->getAddExpr(CommonExprs, Base, Imm);
-    PHINode *Phi = InsertAffinePhi(Start, Stride, IVIncInsertPt, L,
-                                   PreheaderRewriter);
-    // Loop over all the users with the same base.
-    do {
-      UsersToProcess[i].Base = SE->getIntegerSCEV(0, Stride->getType());
-      UsersToProcess[i].Imm = SE->getMinusSCEV(UsersToProcess[i].Imm, Imm);
-      UsersToProcess[i].Phi = Phi;
-      assert(UsersToProcess[i].Imm->isLoopInvariant(L) &&
-             "ShouldUseFullStrengthReductionMode should reject this!");
-    } while (++i != e && Base == UsersToProcess[i].Base);
-  }
-}
-
-/// FindIVIncInsertPt - Return the location to insert the increment instruction.
-/// If the only use is a use of the postinc value (it must be the loop
-/// termination condition), insert it just before the use.
-static Instruction *FindIVIncInsertPt(std::vector<BasedUser> &UsersToProcess,
-                                      const Loop *L) {
-  if (UsersToProcess.size() == 1 &&
-      UsersToProcess[0].isUseOfPostIncrementedValue &&
-      L->contains(UsersToProcess[0].Inst))
-    return UsersToProcess[0].Inst;
-  return L->getLoopLatch()->getTerminator();
-}
-
-/// PrepareToStrengthReduceWithNewPhi - Insert a new induction variable for the
-/// given users to share.
-///
-void
-LoopStrengthReduce::PrepareToStrengthReduceWithNewPhi(
-                                         std::vector<BasedUser> &UsersToProcess,
-                                         const SCEV *Stride,
-                                         const SCEV *CommonExprs,
-                                         Value *CommonBaseV,
-                                         Instruction *IVIncInsertPt,
-                                         const Loop *L,
-                                         SCEVExpander &PreheaderRewriter) {
-  DEBUG(dbgs() << "  Inserting new PHI:\n");
-
-  PHINode *Phi = InsertAffinePhi(SE->getUnknown(CommonBaseV),
-                                 Stride, IVIncInsertPt, L,
-                                 PreheaderRewriter);
-
-  // Remember this in case a later stride is multiple of this.
-  IVsByStride[Stride].addIV(Stride, CommonExprs, Phi);
-
-  // All the users will share this new IV.
-  for (unsigned i = 0, e = UsersToProcess.size(); i != e; ++i)
-    UsersToProcess[i].Phi = Phi;
-
-  DEBUG(dbgs() << "    IV=");
-  DEBUG(WriteAsOperand(dbgs(), Phi, /*PrintType=*/false));
-  DEBUG(dbgs() << "\n");
-}
-
-/// PrepareToStrengthReduceFromSmallerStride - Prepare for the given users to
-/// reuse an induction variable with a stride that is a factor of the current
-/// induction variable.
-///
-void
-LoopStrengthReduce::PrepareToStrengthReduceFromSmallerStride(
-                                         std::vector<BasedUser> &UsersToProcess,
-                                         Value *CommonBaseV,
-                                         const IVExpr &ReuseIV,
-                                         Instruction *PreInsertPt) {
-  DEBUG(dbgs() << "  Rewriting in terms of existing IV of STRIDE "
-               << *ReuseIV.Stride << " and BASE " << *ReuseIV.Base << "\n");
-
-  // All the users will share the reused IV.
-  for (unsigned i = 0, e = UsersToProcess.size(); i != e; ++i)
-    UsersToProcess[i].Phi = ReuseIV.PHI;
-
-  Constant *C = dyn_cast<Constant>(CommonBaseV);
-  if (C &&
-      (!C->isNullValue() &&
-       !fitsInAddressMode(SE->getUnknown(CommonBaseV), CommonBaseV->getType(),
-                         TLI, false)))
-    // We want the common base emitted into the preheader! This is just
-    // using cast as a copy so BitCast (no-op cast) is appropriate
-    CommonBaseV = new BitCastInst(CommonBaseV, CommonBaseV->getType(),
-                                  "commonbase", PreInsertPt);
-}
-
-static bool IsImmFoldedIntoAddrMode(GlobalValue *GV, int64_t Offset,
-                                    const Type *AccessTy,
-                                   std::vector<BasedUser> &UsersToProcess,
-                                   const TargetLowering *TLI) {
-  SmallVector<Instruction*, 16> AddrModeInsts;
-  for (unsigned i = 0, e = UsersToProcess.size(); i != e; ++i) {
-    if (UsersToProcess[i].isUseOfPostIncrementedValue)
-      continue;
-    ExtAddrMode AddrMode =
-      AddressingModeMatcher::Match(UsersToProcess[i].OperandValToReplace,
-                                   AccessTy, UsersToProcess[i].Inst,
-                                   AddrModeInsts, *TLI);
-    if (GV && GV != AddrMode.BaseGV)
-      return false;
-    if (Offset && !AddrMode.BaseOffs)
-      // FIXME: How to accurately check whether its immediate offset is folded.
-      return false;
-    AddrModeInsts.clear();
+  OS << ", Offsets={";
+  for (SmallVectorImpl<int64_t>::const_iterator I = Offsets.begin(),
+       E = Offsets.end(); I != E; ++I) {
+    OS << *I;
+    if (next(I) != E)
+      OS << ',';
   }
-  return true;
-}
+  OS << '}';
 
-/// StrengthReduceIVUsersOfStride - Strength reduce all of the users of a single
-/// stride of IV.  All of the users may have different starting values, and this
-/// may not be the only stride.
-void
-LoopStrengthReduce::StrengthReduceIVUsersOfStride(const SCEV *Stride,
-                                                  IVUsersOfOneStride &Uses,
-                                                  Loop *L) {
-  // If all the users are moved to another stride, then there is nothing to do.
-  if (Uses.Users.empty())
-    return;
+  if (AllFixupsOutsideLoop)
+    OS << ", all-fixups-outside-loop";
+}
 
-  // Keep track if every use in UsersToProcess is an address. If they all are,
-  // we may be able to rewrite the entire collection of them in terms of a
-  // smaller-stride IV.
-  bool AllUsesAreAddresses = true;
-
-  // Keep track if every use of a single stride is outside the loop.  If so,
-  // we want to be more aggressive about reusing a smaller-stride IV; a
-  // multiply outside the loop is better than another IV inside.  Well, usually.
-  bool AllUsesAreOutsideLoop = true;
-
-  // Transform our list of users and offsets to a bit more complex table.  In
-  // this new vector, each 'BasedUser' contains 'Base' the base of the
-  // strided access, as well as the old information from Uses.  We progressively
-  // move information from the Base field to the Imm field, until we eventually
-  // have the full access expression to rewrite the use.
-  std::vector<BasedUser> UsersToProcess;
-  const SCEV *CommonExprs = CollectIVUsers(Stride, Uses, L, AllUsesAreAddresses,
-                                           AllUsesAreOutsideLoop,
-                                           UsersToProcess);
-
-  // Sort the UsersToProcess array so that users with common bases are
-  // next to each other.
-  SortUsersToProcess(UsersToProcess);
-
-  // If we managed to find some expressions in common, we'll need to carry
-  // their value in a register and add it in for each use. This will take up
-  // a register operand, which potentially restricts what stride values are
-  // valid.
-  bool HaveCommonExprs = !CommonExprs->isZero();
-  const Type *ReplacedTy = CommonExprs->getType();
-
-  // If all uses are addresses, consider sinking the immediate part of the
-  // common expression back into uses if they can fit in the immediate fields.
-  if (TLI && HaveCommonExprs && AllUsesAreAddresses) {
-    const SCEV *NewCommon = CommonExprs;
-    const SCEV *Imm = SE->getIntegerSCEV(0, ReplacedTy);
-    MoveImmediateValues(TLI, Type::getVoidTy(
-                        L->getLoopPreheader()->getContext()),
-                        NewCommon, Imm, true, L, SE);
-    if (!Imm->isZero()) {
-      bool DoSink = true;
-
-      // If the immediate part of the common expression is a GV, check if it's
-      // possible to fold it into the target addressing mode.
-      GlobalValue *GV = 0;
-      if (const SCEVUnknown *SU = dyn_cast<SCEVUnknown>(Imm))
-        GV = dyn_cast<GlobalValue>(SU->getValue());
-      int64_t Offset = 0;
-      if (const SCEVConstant *SC = dyn_cast<SCEVConstant>(Imm))
-        Offset = SC->getValue()->getSExtValue();
-      if (GV || Offset)
-        // Pass VoidTy as the AccessTy to be conservative, because
-        // there could be multiple access types among all the uses.
-        DoSink = IsImmFoldedIntoAddrMode(GV, Offset,
-                          Type::getVoidTy(L->getLoopPreheader()->getContext()),
-                                         UsersToProcess, TLI);
-
-      if (DoSink) {
-        DEBUG(dbgs() << "  Sinking " << *Imm << " back down into uses\n");
-        for (unsigned i = 0, e = UsersToProcess.size(); i != e; ++i)
-          UsersToProcess[i].Imm = SE->getAddExpr(UsersToProcess[i].Imm, Imm);
-        CommonExprs = NewCommon;
-        HaveCommonExprs = !CommonExprs->isZero();
-        ++NumImmSunk;
-      }
-    }
-  }
+void LSRUse::dump() const {
+  print(errs()); errs() << '\n';
+}
 
-  // Now that we know what we need to do, insert the PHI node itself.
-  //
-  DEBUG(dbgs() << "LSR: Examining IVs of TYPE " << *ReplacedTy << " of STRIDE "
-               << *Stride << ":\n"
-               << "  Common base: " << *CommonExprs << "\n");
+/// isLegalUse - Test whether the use described by AM is "legal", meaning it can
+/// be completely folded into the user instruction at isel time. This includes
+/// address-mode folding and special icmp tricks.
+static bool isLegalUse(const TargetLowering::AddrMode &AM,
+                       LSRUse::KindType Kind, const Type *AccessTy,
+                       const TargetLowering *TLI) {
+  switch (Kind) {
+  case LSRUse::Address:
+    // If we have low-level target information, ask the target if it can
+    // completely fold this address.
+    if (TLI) return TLI->isLegalAddressingMode(AM, AccessTy);
+
+    // Otherwise, just guess that reg+reg addressing is legal.
+    return !AM.BaseGV && AM.BaseOffs == 0 && AM.Scale <= 1;
+
+  case LSRUse::ICmpZero:
+    // There's not even a target hook for querying whether it would be legal to
+    // fold a GV into an ICmp.
+    if (AM.BaseGV)
+      return false;
 
-  SCEVExpander Rewriter(*SE);
-  SCEVExpander PreheaderRewriter(*SE);
+    // ICmp only has two operands; don't allow more than two non-trivial parts.
+    if (AM.Scale != 0 && AM.HasBaseReg && AM.BaseOffs != 0)
+      return false;
 
-  BasicBlock  *Preheader = L->getLoopPreheader();
-  Instruction *PreInsertPt = Preheader->getTerminator();
-  BasicBlock *LatchBlock = L->getLoopLatch();
-  Instruction *IVIncInsertPt = LatchBlock->getTerminator();
-
-  Value *CommonBaseV = Constant::getNullValue(ReplacedTy);
-
-  const SCEV *RewriteFactor = SE->getIntegerSCEV(0, ReplacedTy);
-  IVExpr   ReuseIV(SE->getIntegerSCEV(0,
-                                    Type::getInt32Ty(Preheader->getContext())),
-                   SE->getIntegerSCEV(0,
-                                    Type::getInt32Ty(Preheader->getContext())),
-                   0);
-
-  // Choose a strength-reduction strategy and prepare for it by creating
-  // the necessary PHIs and adjusting the bookkeeping.
-  if (ShouldUseFullStrengthReductionMode(UsersToProcess, L,
-                                         AllUsesAreAddresses, Stride)) {
-    PrepareToStrengthReduceFully(UsersToProcess, Stride, CommonExprs, L,
-                                 PreheaderRewriter);
-  } else {
-    // Emit the initial base value into the loop preheader.
-    CommonBaseV = PreheaderRewriter.expandCodeFor(CommonExprs, ReplacedTy,
-                                                  PreInsertPt);
-
-    // If all uses are addresses, check if it is possible to reuse an IV.  The
-    // new IV must have a stride that is a multiple of the old stride; the
-    // multiple must be a number that can be encoded in the scale field of the
-    // target addressing mode; and we must have a valid instruction after this
-    // substitution, including the immediate field, if any.
-    RewriteFactor = CheckForIVReuse(HaveCommonExprs, AllUsesAreAddresses,
-                                    AllUsesAreOutsideLoop,
-                                    Stride, ReuseIV, ReplacedTy,
-                                    UsersToProcess);
-    if (!RewriteFactor->isZero())
-      PrepareToStrengthReduceFromSmallerStride(UsersToProcess, CommonBaseV,
-                                               ReuseIV, PreInsertPt);
-    else {
-      IVIncInsertPt = FindIVIncInsertPt(UsersToProcess, L);
-      PrepareToStrengthReduceWithNewPhi(UsersToProcess, Stride, CommonExprs,
-                                        CommonBaseV, IVIncInsertPt,
-                                        L, PreheaderRewriter);
-    }
-  }
+    // ICmp only supports no scale or a -1 scale, as we can "fold" a -1 scale by
+    // putting the scaled register in the other operand of the icmp.
+    if (AM.Scale != 0 && AM.Scale != -1)
+      return false;
 
-  // Process all the users now, replacing their strided uses with
-  // strength-reduced forms.  This outer loop handles all bases, the inner
-  // loop handles all users of a particular base.
-  while (!UsersToProcess.empty()) {
-    const SCEV *Base = UsersToProcess.back().Base;
-    Instruction *Inst = UsersToProcess.back().Inst;
-
-    // Emit the code for Base into the preheader.
-    Value *BaseV = 0;
-    if (!Base->isZero()) {
-      BaseV = PreheaderRewriter.expandCodeFor(Base, 0, PreInsertPt);
-
-      DEBUG(dbgs() << "  INSERTING code for BASE = " << *Base << ":");
-      if (BaseV->hasName())
-        DEBUG(dbgs() << " Result value name = %" << BaseV->getName());
-      DEBUG(dbgs() << "\n");
-
-      // If BaseV is a non-zero constant, make sure that it gets inserted into
-      // the preheader, instead of being forward substituted into the uses.  We
-      // do this by forcing a BitCast (noop cast) to be inserted into the
-      // preheader in this case.
-      if (!fitsInAddressMode(Base, getAccessType(Inst), TLI, false) &&
-          isa<Constant>(BaseV)) {
-        // We want this constant emitted into the preheader! This is just
-        // using cast as a copy so BitCast (no-op cast) is appropriate
-        BaseV = new BitCastInst(BaseV, BaseV->getType(), "preheaderinsert",
-                                PreInsertPt);
-      }
+    // If we have low-level target information, ask the target if it can fold an
+    // integer immediate on an icmp.
+    if (AM.BaseOffs != 0) {
+      if (TLI) return TLI->isLegalICmpImmediate(-AM.BaseOffs);
+      return false;
     }
 
-    // Emit the code to add the immediate offset to the Phi value, just before
-    // the instructions that we identified as using this stride and base.
-    do {
-      // FIXME: Use emitted users to emit other users.
-      BasedUser &User = UsersToProcess.back();
-
-      DEBUG(dbgs() << "    Examining ");
-      if (User.isUseOfPostIncrementedValue)
-        DEBUG(dbgs() << "postinc");
-      else
-        DEBUG(dbgs() << "preinc");
-      DEBUG(dbgs() << " use ");
-      DEBUG(WriteAsOperand(dbgs(), UsersToProcess.back().OperandValToReplace,
-                           /*PrintType=*/false));
-      DEBUG(dbgs() << " in Inst: " << *User.Inst);
-
-      // If this instruction wants to use the post-incremented value, move it
-      // after the post-inc and use its value instead of the PHI.
-      Value *RewriteOp = User.Phi;
-      if (User.isUseOfPostIncrementedValue) {
-        RewriteOp = User.Phi->getIncomingValueForBlock(LatchBlock);
-        // If this user is in the loop, make sure it is the last thing in the
-        // loop to ensure it is dominated by the increment. In case it's the
-        // only use of the iv, the increment instruction is already before the
-        // use.
-        if (L->contains(User.Inst) && User.Inst != IVIncInsertPt)
-          User.Inst->moveBefore(IVIncInsertPt);
-      }
-
-      const SCEV *RewriteExpr = SE->getUnknown(RewriteOp);
-
-      if (SE->getEffectiveSCEVType(RewriteOp->getType()) !=
-          SE->getEffectiveSCEVType(ReplacedTy)) {
-        assert(SE->getTypeSizeInBits(RewriteOp->getType()) >
-               SE->getTypeSizeInBits(ReplacedTy) &&
-               "Unexpected widening cast!");
-        RewriteExpr = SE->getTruncateExpr(RewriteExpr, ReplacedTy);
-      }
+    return true;
 
-      // If we had to insert new instructions for RewriteOp, we have to
-      // consider that they may not have been able to end up immediately
-      // next to RewriteOp, because non-PHI instructions may never precede
-      // PHI instructions in a block. In this case, remember where the last
-      // instruction was inserted so that if we're replacing a different
-      // PHI node, we can use the later point to expand the final
-      // RewriteExpr.
-      Instruction *NewBasePt = dyn_cast<Instruction>(RewriteOp);
-      if (RewriteOp == User.Phi) NewBasePt = 0;
-
-      // Clear the SCEVExpander's expression map so that we are guaranteed
-      // to have the code emitted where we expect it.
-      Rewriter.clear();
-
-      // If we are reusing the iv, then it must be multiplied by a constant
-      // factor to take advantage of the addressing mode scale component.
-      if (!RewriteFactor->isZero()) {
-        // If we're reusing an IV with a nonzero base (currently this happens
-        // only when all reuses are outside the loop) subtract that base here.
-        // The base has been used to initialize the PHI node but we don't want
-        // it here.
-        if (!ReuseIV.Base->isZero()) {
-          const SCEV *typedBase = ReuseIV.Base;
-          if (SE->getEffectiveSCEVType(RewriteExpr->getType()) !=
-              SE->getEffectiveSCEVType(ReuseIV.Base->getType())) {
-            // It's possible the original IV is a larger type than the new IV,
-            // in which case we have to truncate the Base.  We checked in
-            // RequiresTypeConversion that this is valid.
-            assert(SE->getTypeSizeInBits(RewriteExpr->getType()) <
-                   SE->getTypeSizeInBits(ReuseIV.Base->getType()) &&
-                   "Unexpected lengthening conversion!");
-            typedBase = SE->getTruncateExpr(ReuseIV.Base,
-                                            RewriteExpr->getType());
-          }
-          RewriteExpr = SE->getMinusSCEV(RewriteExpr, typedBase);
-        }
+  case LSRUse::Basic:
+    // Only handle single-register values.
+    return !AM.BaseGV && AM.Scale == 0 && AM.BaseOffs == 0;
 
-        // Multiply old variable, with base removed, by new scale factor.
-        RewriteExpr = SE->getMulExpr(RewriteFactor,
-                                     RewriteExpr);
-
-        // The common base is emitted in the loop preheader. But since we
-        // are reusing an IV, it has not been used to initialize the PHI node.
-        // Add it to the expression used to rewrite the uses.
-        // When this use is outside the loop, we earlier subtracted the
-        // common base, and are adding it back here.  Use the same expression
-        // as before, rather than CommonBaseV, so DAGCombiner will zap it.
-        if (!CommonExprs->isZero()) {
-          if (L->contains(User.Inst))
-            RewriteExpr = SE->getAddExpr(RewriteExpr,
-                                       SE->getUnknown(CommonBaseV));
-          else
-            RewriteExpr = SE->getAddExpr(RewriteExpr, CommonExprs);
-        }
-      }
-
-      // Now that we know what we need to do, insert code before User for the
-      // immediate and any loop-variant expressions.
-      if (BaseV)
-        // Add BaseV to the PHI value if needed.
-        RewriteExpr = SE->getAddExpr(RewriteExpr, SE->getUnknown(BaseV));
-
-      User.RewriteInstructionToUseNewBase(RewriteExpr, NewBasePt,
-                                          Rewriter, L, this,
-                                          DeadInsts, SE);
-
-      // Mark old value we replaced as possibly dead, so that it is eliminated
-      // if we just replaced the last use of that value.
-      DeadInsts.push_back(User.OperandValToReplace);
-
-      UsersToProcess.pop_back();
-      ++NumReduced;
-
-      // If there are any more users to process with the same base, process them
-      // now.  We sorted by base above, so we just have to check the last elt.
-    } while (!UsersToProcess.empty() && UsersToProcess.back().Base == Base);
-    // TODO: Next, find out which base index is the most common, pull it out.
-  }
-
-  // IMPORTANT TODO: Figure out how to partition the IV's with this stride, but
-  // different starting values, into different PHIs.
-}
-
-void LoopStrengthReduce::StrengthReduceIVUsers(Loop *L) {
-  // Note: this processes each stride/type pair individually.  All users
-  // passed into StrengthReduceIVUsersOfStride have the same type AND stride.
-  // Also, note that we iterate over IVUsesByStride indirectly by using
-  // StrideOrder. This extra layer of indirection makes the ordering of
-  // strides deterministic - not dependent on map order.
-  for (unsigned Stride = 0, e = IU->StrideOrder.size(); Stride != e; ++Stride) {
-    std::map<const SCEV *, IVUsersOfOneStride *>::iterator SI =
-      IU->IVUsesByStride.find(IU->StrideOrder[Stride]);
-    assert(SI != IU->IVUsesByStride.end() && "Stride doesn't exist!");
-    // FIXME: Generalize to non-affine IV's.
-    if (!SI->first->isLoopInvariant(L))
-      continue;
-    StrengthReduceIVUsersOfStride(SI->first, *SI->second, L);
+  case LSRUse::Special:
+    // Only handle -1 scales, or no scale.
+    return AM.Scale == 0 || AM.Scale == -1;
   }
+
+  return false;
 }
 
-/// FindIVUserForCond - If Cond has an operand that is an expression of an IV,
-/// set the IV user and stride information and return true, otherwise return
-/// false.
-bool LoopStrengthReduce::FindIVUserForCond(ICmpInst *Cond,
-                                           IVStrideUse *&CondUse,
-                                           const SCEV* &CondStride) {
-  for (unsigned Stride = 0, e = IU->StrideOrder.size();
-       Stride != e && !CondUse; ++Stride) {
-    std::map<const SCEV *, IVUsersOfOneStride *>::iterator SI =
-      IU->IVUsesByStride.find(IU->StrideOrder[Stride]);
-    assert(SI != IU->IVUsesByStride.end() && "Stride doesn't exist!");
-
-    for (ilist<IVStrideUse>::iterator UI = SI->second->Users.begin(),
-         E = SI->second->Users.end(); UI != E; ++UI)
-      if (UI->getUser() == Cond) {
-        // NOTE: we could handle setcc instructions with multiple uses here, but
-        // InstCombine does it as well for simple uses, it's not clear that it
-        // occurs enough in real life to handle.
-        CondUse = UI;
-        CondStride = SI->first;
-        return true;
-      }
+static bool isLegalUse(TargetLowering::AddrMode AM,
+                       int64_t MinOffset, int64_t MaxOffset,
+                       LSRUse::KindType Kind, const Type *AccessTy,
+                       const TargetLowering *TLI) {
+  // Check for overflow.
+  if (((int64_t)((uint64_t)AM.BaseOffs + MinOffset) > AM.BaseOffs) !=
+      (MinOffset > 0))
+    return false;
+  AM.BaseOffs = (uint64_t)AM.BaseOffs + MinOffset;
+  if (isLegalUse(AM, Kind, AccessTy, TLI)) {
+    AM.BaseOffs = (uint64_t)AM.BaseOffs - MinOffset;
+    // Check for overflow.
+    if (((int64_t)((uint64_t)AM.BaseOffs + MaxOffset) > AM.BaseOffs) !=
+        (MaxOffset > 0))
+      return false;
+    AM.BaseOffs = (uint64_t)AM.BaseOffs + MaxOffset;
+    return isLegalUse(AM, Kind, AccessTy, TLI);
   }
   return false;
 }
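The `isLegalUse` overload ending above guards the `BaseOffs + MinOffset`/`+ MaxOffset` additions with a direction check performed in unsigned arithmetic: if the signed result did not move in the same direction as the offset, the add overflowed. A minimal standalone sketch of that idiom (the function name is illustrative, not from the patch):

```cpp
#include <cstdint>

// Returns true if Base + Delta overflows int64_t. Mirrors the check in
// isLegalUse above: do the addition in uint64_t (well-defined wraparound),
// then verify the signed result moved in the same direction as Delta.
bool addOverflowsInt64(int64_t Base, int64_t Delta) {
  int64_t Sum = (int64_t)((uint64_t)Base + (uint64_t)Delta);
  return (Sum > Base) != (Delta > 0);
}
```

When `Delta == 0`, both sides of the inequality are false, so no overflow is reported, which matches the behavior of the check in the patch.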
 
-namespace {
-  // Constant strides come first which in turns are sorted by their absolute
-  // values. If absolute values are the same, then positive strides comes first.
-  // e.g.
-  // 4, -1, X, 1, 2 ==> 1, -1, 2, 4, X
-  struct StrideCompare {
-    const ScalarEvolution *SE;
-    explicit StrideCompare(const ScalarEvolution *se) : SE(se) {}
-
-    bool operator()(const SCEV *LHS, const SCEV *RHS) {
-      const SCEVConstant *LHSC = dyn_cast<SCEVConstant>(LHS);
-      const SCEVConstant *RHSC = dyn_cast<SCEVConstant>(RHS);
-      if (LHSC && RHSC) {
-        int64_t  LV = LHSC->getValue()->getSExtValue();
-        int64_t  RV = RHSC->getValue()->getSExtValue();
-        uint64_t ALV = (LV < 0) ? -LV : LV;
-        uint64_t ARV = (RV < 0) ? -RV : RV;
-        if (ALV == ARV) {
-          if (LV != RV)
-            return LV > RV;
-        } else {
-          return ALV < ARV;
-        }
+static bool isAlwaysFoldable(int64_t BaseOffs,
+                             GlobalValue *BaseGV,
+                             bool HasBaseReg,
+                             LSRUse::KindType Kind, const Type *AccessTy,
+                             const TargetLowering *TLI,
+                             ScalarEvolution &SE) {
+  // Fast-path: zero is always foldable.
+  if (BaseOffs == 0 && !BaseGV) return true;
+
+  // Conservatively, create an address with an immediate, a base,
+  // and a scale.
+  TargetLowering::AddrMode AM;
+  AM.BaseOffs = BaseOffs;
+  AM.BaseGV = BaseGV;
+  AM.HasBaseReg = HasBaseReg;
+  AM.Scale = Kind == LSRUse::ICmpZero ? -1 : 1;
+
+  return isLegalUse(AM, Kind, AccessTy, TLI);
+}
 
-        // If it's the same value but different type, sort by bit width so
-        // that we emit larger induction variables before smaller
-        // ones, letting the smaller be re-written in terms of larger ones.
-        return SE->getTypeSizeInBits(RHS->getType()) <
-               SE->getTypeSizeInBits(LHS->getType());
-      }
-      return LHSC && !RHSC;
-    }
-  };
+static bool isAlwaysFoldable(const SCEV *S,
+                             int64_t MinOffset, int64_t MaxOffset,
+                             bool HasBaseReg,
+                             LSRUse::KindType Kind, const Type *AccessTy,
+                             const TargetLowering *TLI,
+                             ScalarEvolution &SE) {
+  // Fast-path: zero is always foldable.
+  if (S->isZero()) return true;
+
+  // Conservatively, create an address with an immediate, a base,
+  // and a scale.
+  int64_t BaseOffs = ExtractImmediate(S, SE);
+  GlobalValue *BaseGV = ExtractSymbol(S, SE);
+
+  // If there's anything else involved, it's not foldable.
+  if (!S->isZero()) return false;
+
+  // Fast-path: zero is always foldable.
+  if (BaseOffs == 0 && !BaseGV) return true;
+
+  // Conservatively, create an address with an immediate, a base,
+  // and a scale.
+  TargetLowering::AddrMode AM;
+  AM.BaseOffs = BaseOffs;
+  AM.BaseGV = BaseGV;
+  AM.HasBaseReg = HasBaseReg;
+  AM.Scale = Kind == LSRUse::ICmpZero ? -1 : 1;
+
+  return isLegalUse(AM, MinOffset, MaxOffset, Kind, AccessTy, TLI);
 }
 
-/// ChangeCompareStride - If a loop termination compare instruction is the
-/// only use of its stride, and the compaison is against a constant value,
-/// try eliminate the stride by moving the compare instruction to another
-/// stride and change its constant operand accordingly. e.g.
-///
-/// loop:
-/// ...
-/// v1 = v1 + 3
-/// v2 = v2 + 1
-/// if (v2 < 10) goto loop
-/// =>
-/// loop:
-/// ...
-/// v1 = v1 + 3
-/// if (v1 < 30) goto loop
-ICmpInst *LoopStrengthReduce::ChangeCompareStride(Loop *L, ICmpInst *Cond,
-                                                  IVStrideUse* &CondUse,
-                                                  const SCEV* &CondStride,
-                                                  bool PostPass) {
-  // If there's only one stride in the loop, there's nothing to do here.
-  if (IU->StrideOrder.size() < 2)
-    return Cond;
-  // If there are other users of the condition's stride, don't bother
-  // trying to change the condition because the stride will still
-  // remain.
-  std::map<const SCEV *, IVUsersOfOneStride *>::iterator I =
-    IU->IVUsesByStride.find(CondStride);
-  if (I == IU->IVUsesByStride.end())
-    return Cond;
-  if (I->second->Users.size() > 1) {
-    for (ilist<IVStrideUse>::iterator II = I->second->Users.begin(),
-           EE = I->second->Users.end(); II != EE; ++II) {
-      if (II->getUser() == Cond)
-        continue;
-      if (!isInstructionTriviallyDead(II->getUser()))
-        return Cond;
-    }
+/// FormulaSorter - This class implements an ordering for formulae which sorts
+/// them by their standalone cost.
+class FormulaSorter {
+  /// These two sets are kept empty, so that we compute standalone costs.
+  DenseSet<const SCEV *> VisitedRegs;
+  SmallPtrSet<const SCEV *, 16> Regs;
+  Loop *L;
+  LSRUse *LU;
+  ScalarEvolution &SE;
+  DominatorTree &DT;
+
+public:
+  FormulaSorter(Loop *l, LSRUse &lu, ScalarEvolution &se, DominatorTree &dt)
+    : L(l), LU(&lu), SE(se), DT(dt) {}
+
+  bool operator()(const Formula &A, const Formula &B) {
+    Cost CostA;
+    CostA.RateFormula(A, Regs, VisitedRegs, L, LU->Offsets, SE, DT);
+    Regs.clear();
+    Cost CostB;
+    CostB.RateFormula(B, Regs, VisitedRegs, L, LU->Offsets, SE, DT);
+    Regs.clear();
+    return CostA < CostB;
+  }
+};
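FormulaSorter rates each formula in isolation by handing `RateFormula` a register set that is cleared after every call, so the comparison reflects standalone cost rather than cost relative to previously chosen registers. A hedged sketch of the same pattern with a toy cost model (the distinct-register count stands in for the real `Cost`; all names here are illustrative):

```cpp
#include <set>
#include <vector>

// Toy comparator: a "formula" is a list of register ids, and its
// standalone cost is the number of distinct registers it uses.
struct ToyFormulaSorter {
  std::set<int> Scratch; // reused across ratings, like Regs above

  int rate(const std::vector<int> &F) {
    Scratch.insert(F.begin(), F.end());
    int Cost = (int)Scratch.size();
    Scratch.clear(); // keep the set empty so each rating is standalone
    return Cost;
  }

  bool operator()(const std::vector<int> &A, const std::vector<int> &B) {
    return rate(A) < rate(B);
  }
};
```

Passing such a functor to `std::sort` orders formulae cheapest-first, analogous to how the real sorter is used.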
+
+/// LSRInstance - This class holds state for the main loop strength reduction
+/// logic.
+class LSRInstance {
+  IVUsers &IU;
+  ScalarEvolution &SE;
+  DominatorTree &DT;
+  const TargetLowering *const TLI;
+  Loop *const L;
+  bool Changed;
+
+  /// IVIncInsertPos - This is the position at which the current loop's
+  /// induction variable increment should be placed. In simple loops, this is
+  /// the latch block's terminator. But in more complicated cases, this is a
+  /// position which will dominate all the in-loop post-increment users.
+  Instruction *IVIncInsertPos;
+
+  /// Factors - Interesting factors between use strides.
+  SmallSetVector<int64_t, 8> Factors;
+
+  /// Types - Interesting use types, to facilitate truncation reuse.
+  SmallSetVector<const Type *, 4> Types;
+
+  /// Fixups - The list of operands which are to be replaced.
+  SmallVector<LSRFixup, 16> Fixups;
+
+  /// Uses - The list of interesting uses.
+  SmallVector<LSRUse, 16> Uses;
+
+  /// RegUses - Track which uses use which register candidates.
+  RegUseTracker RegUses;
+
+  void OptimizeShadowIV();
+  bool FindIVUserForCond(ICmpInst *Cond, IVStrideUse *&CondUse);
+  ICmpInst *OptimizeMax(ICmpInst *Cond, IVStrideUse* &CondUse);
+  bool OptimizeLoopTermCond();
+
+  void CollectInterestingTypesAndFactors();
+  void CollectFixupsAndInitialFormulae();
+
+  LSRFixup &getNewFixup() {
+    Fixups.push_back(LSRFixup());
+    return Fixups.back();
   }
-  // Only handle constant strides for now.
-  const SCEVConstant *SC = dyn_cast<SCEVConstant>(CondStride);
-  if (!SC) return Cond;
-
-  ICmpInst::Predicate Predicate = Cond->getPredicate();
-  int64_t CmpSSInt = SC->getValue()->getSExtValue();
-  unsigned BitWidth = SE->getTypeSizeInBits(CondStride->getType());
-  uint64_t SignBit = 1ULL << (BitWidth-1);
-  const Type *CmpTy = Cond->getOperand(0)->getType();
-  const Type *NewCmpTy = NULL;
-  unsigned TyBits = SE->getTypeSizeInBits(CmpTy);
-  unsigned NewTyBits = 0;
-  const SCEV *NewStride = NULL;
-  Value *NewCmpLHS = NULL;
-  Value *NewCmpRHS = NULL;
-  int64_t Scale = 1;
-  const SCEV *NewOffset = SE->getIntegerSCEV(0, CmpTy);
-
-  if (ConstantInt *C = dyn_cast<ConstantInt>(Cond->getOperand(1))) {
-    int64_t CmpVal = C->getValue().getSExtValue();
-
-    // Check the relevant induction variable for conformance to
-    // the pattern.
-    const SCEV *IV = SE->getSCEV(Cond->getOperand(0));
-    const SCEVAddRecExpr *AR = dyn_cast<SCEVAddRecExpr>(IV);
-    if (!AR || !AR->isAffine())
-      return Cond;
-
-    const SCEVConstant *StartC = dyn_cast<SCEVConstant>(AR->getStart());
-    // Check stride constant and the comparision constant signs to detect
-    // overflow.
-    if (StartC) {
-      if ((StartC->getValue()->getSExtValue() < CmpVal && CmpSSInt < 0) ||
-          (StartC->getValue()->getSExtValue() > CmpVal && CmpSSInt > 0))
-        return Cond;
-    } else {
-      // More restrictive check for the other cases.
-      if ((CmpVal & SignBit) != (CmpSSInt & SignBit))
-        return Cond;
-    }
-
-    // Look for a suitable stride / iv as replacement.
-    for (unsigned i = 0, e = IU->StrideOrder.size(); i != e; ++i) {
-      std::map<const SCEV *, IVUsersOfOneStride *>::iterator SI =
-        IU->IVUsesByStride.find(IU->StrideOrder[i]);
-      if (!isa<SCEVConstant>(SI->first) || SI->second->Users.empty())
-        continue;
-      int64_t SSInt = cast<SCEVConstant>(SI->first)->getValue()->getSExtValue();
-      if (SSInt == CmpSSInt ||
-          abs64(SSInt) < abs64(CmpSSInt) ||
-          (SSInt % CmpSSInt) != 0)
-        continue;
 
-      Scale = SSInt / CmpSSInt;
-      int64_t NewCmpVal = CmpVal * Scale;
+  // Support for sharing of LSRUses between LSRFixups.
+  typedef DenseMap<const SCEV *, size_t> UseMapTy;
+  UseMapTy UseMap;
+
+  bool reconcileNewOffset(LSRUse &LU, int64_t NewOffset,
+                          LSRUse::KindType Kind, const Type *AccessTy);
+
+  std::pair<size_t, int64_t> getUse(const SCEV *&Expr,
+                                    LSRUse::KindType Kind,
+                                    const Type *AccessTy);
+
+public:
+  void InsertInitialFormula(const SCEV *S, Loop *L, LSRUse &LU, size_t LUIdx);
+  void InsertSupplementalFormula(const SCEV *S, LSRUse &LU, size_t LUIdx);
+  void CountRegisters(const Formula &F, size_t LUIdx);
+  bool InsertFormula(LSRUse &LU, unsigned LUIdx, const Formula &F);
+
+  void CollectLoopInvariantFixupsAndFormulae();
+
+  void GenerateReassociations(LSRUse &LU, unsigned LUIdx, Formula Base,
+                              unsigned Depth = 0);
+  void GenerateCombinations(LSRUse &LU, unsigned LUIdx, Formula Base);
+  void GenerateSymbolicOffsets(LSRUse &LU, unsigned LUIdx, Formula Base);
+  void GenerateConstantOffsets(LSRUse &LU, unsigned LUIdx, Formula Base);
+  void GenerateICmpZeroScales(LSRUse &LU, unsigned LUIdx, Formula Base);
+  void GenerateScales(LSRUse &LU, unsigned LUIdx, Formula Base);
+  void GenerateTruncates(LSRUse &LU, unsigned LUIdx, Formula Base);
+  void GenerateCrossUseConstantOffsets();
+  void GenerateAllReuseFormulae();
+
+  void FilterOutUndesirableDedicatedRegisters();
+  void NarrowSearchSpaceUsingHeuristics();
+
+  void SolveRecurse(SmallVectorImpl<const Formula *> &Solution,
+                    Cost &SolutionCost,
+                    SmallVectorImpl<const Formula *> &Workspace,
+                    const Cost &CurCost,
+                    const SmallPtrSet<const SCEV *, 16> &CurRegs,
+                    DenseSet<const SCEV *> &VisitedRegs) const;
+  void Solve(SmallVectorImpl<const Formula *> &Solution) const;
+
+  Value *Expand(const LSRFixup &LF,
+                const Formula &F,
+                BasicBlock::iterator IP, Loop *L, Instruction *IVIncInsertPos,
+                SCEVExpander &Rewriter,
+                SmallVectorImpl<WeakVH> &DeadInsts,
+                ScalarEvolution &SE, DominatorTree &DT) const;
+  void Rewrite(const LSRFixup &LF,
+               const Formula &F,
+               Loop *L, Instruction *IVIncInsertPos,
+               SCEVExpander &Rewriter,
+               SmallVectorImpl<WeakVH> &DeadInsts,
+               ScalarEvolution &SE, DominatorTree &DT,
+               Pass *P) const;
+  void ImplementSolution(const SmallVectorImpl<const Formula *> &Solution,
+                         Pass *P);
+
+  LSRInstance(const TargetLowering *tli, Loop *l, Pass *P);
+
+  bool getChanged() const { return Changed; }
+
+  void print_factors_and_types(raw_ostream &OS) const;
+  void print_fixups(raw_ostream &OS) const;
+  void print_uses(raw_ostream &OS) const;
+  void print(raw_ostream &OS) const;
+  void dump() const;
+};
 
-      // If old icmp value fits in icmp immediate field, but the new one doesn't
-      // try something else.
-      if (TLI &&
-          TLI->isLegalICmpImmediate(CmpVal) &&
-          !TLI->isLegalICmpImmediate(NewCmpVal))
-        continue;
+}
 
-      APInt Mul = APInt(BitWidth*2, CmpVal, true);
-      Mul = Mul * APInt(BitWidth*2, Scale, true);
-      // Check for overflow.
-      if (!Mul.isSignedIntN(BitWidth))
-        continue;
-      // Check for overflow in the stride's type too.
-      if (!Mul.isSignedIntN(SE->getTypeSizeInBits(SI->first->getType())))
-        continue;
+/// OptimizeShadowIV - If IV is used in an int-to-float cast
+/// inside the loop then try to eliminate the cast operation.
+void LSRInstance::OptimizeShadowIV() {
+  const SCEV *BackedgeTakenCount = SE.getBackedgeTakenCount(L);
+  if (isa<SCEVCouldNotCompute>(BackedgeTakenCount))
+    return;
 
-      // Watch out for overflow.
-      if (ICmpInst::isSigned(Predicate) &&
-          (CmpVal & SignBit) != (NewCmpVal & SignBit))
-        continue;
+  for (IVUsers::const_iterator UI = IU.begin(), E = IU.end();
+       UI != E; /* empty */) {
+    IVUsers::const_iterator CandidateUI = UI;
+    ++UI;
+    Instruction *ShadowUse = CandidateUI->getUser();
+    const Type *DestTy = NULL;
 
-      // Pick the best iv to use trying to avoid a cast.
-      NewCmpLHS = NULL;
-      for (ilist<IVStrideUse>::iterator UI = SI->second->Users.begin(),
-             E = SI->second->Users.end(); UI != E; ++UI) {
-        Value *Op = UI->getOperandValToReplace();
-
-        // If the IVStrideUse implies a cast, check for an actual cast which
-        // can be used to find the original IV expression.
-        if (SE->getEffectiveSCEVType(Op->getType()) !=
-            SE->getEffectiveSCEVType(SI->first->getType())) {
-          CastInst *CI = dyn_cast<CastInst>(Op);
-          // If it's not a simple cast, it's complicated.
-          if (!CI)
-            continue;
-          // If it's a cast from a type other than the stride type,
-          // it's complicated.
-          if (CI->getOperand(0)->getType() != SI->first->getType())
-            continue;
-          // Ok, we found the IV expression in the stride's type.
-          Op = CI->getOperand(0);
-        }
+    /* If shadow use is an int->float cast then insert a second IV
+       to eliminate this cast.
 
-        NewCmpLHS = Op;
-        if (NewCmpLHS->getType() == CmpTy)
-          break;
-      }
-      if (!NewCmpLHS)
-        continue;
+         for (unsigned i = 0; i < n; ++i)
+           foo((double)i);
 
-      NewCmpTy = NewCmpLHS->getType();
-      NewTyBits = SE->getTypeSizeInBits(NewCmpTy);
-      const Type *NewCmpIntTy = IntegerType::get(Cond->getContext(), NewTyBits);
-      if (RequiresTypeConversion(NewCmpTy, CmpTy)) {
-        // Check if it is possible to rewrite it using
-        // an iv / stride of a smaller integer type.
-        unsigned Bits = NewTyBits;
-        if (ICmpInst::isSigned(Predicate))
-          --Bits;
-        uint64_t Mask = (1ULL << Bits) - 1;
-        if (((uint64_t)NewCmpVal & Mask) != (uint64_t)NewCmpVal)
-          continue;
-      }
+       is transformed into
 
-      // Don't rewrite if use offset is non-constant and the new type is
-      // of a different type.
-      // FIXME: too conservative?
-      if (NewTyBits != TyBits && !isa<SCEVConstant>(CondUse->getOffset()))
-        continue;
+         double d = 0.0;
+         for (unsigned i = 0; i < n; ++i, ++d)
+           foo(d);
+    */
+    if (UIToFPInst *UCast = dyn_cast<UIToFPInst>(CandidateUI->getUser()))
+      DestTy = UCast->getDestTy();
+    else if (SIToFPInst *SCast = dyn_cast<SIToFPInst>(CandidateUI->getUser()))
+      DestTy = SCast->getDestTy();
+    if (!DestTy) continue;
 
-      if (!PostPass) {
-        bool AllUsesAreAddresses = true;
-        bool AllUsesAreOutsideLoop = true;
-        std::vector<BasedUser> UsersToProcess;
-        const SCEV *CommonExprs = CollectIVUsers(SI->first, *SI->second, L,
-                                                 AllUsesAreAddresses,
-                                                 AllUsesAreOutsideLoop,
-                                                 UsersToProcess);
-        // Avoid rewriting the compare instruction with an iv of new stride
-        // if it's likely the new stride uses will be rewritten using the
-        // stride of the compare instruction.
-        if (AllUsesAreAddresses &&
-            ValidScale(!CommonExprs->isZero(), Scale, UsersToProcess))
-          continue;
-      }
+    if (TLI) {
+      // If the target does not support DestTy natively then do not apply
+      // this transformation.
+      EVT DVT = TLI->getValueType(DestTy);
+      if (!TLI->isTypeLegal(DVT)) continue;
+    }
 
-      // Avoid rewriting the compare instruction with an iv which has
-      // implicit extension or truncation built into it.
-      // TODO: This is over-conservative.
-      if (SE->getTypeSizeInBits(CondUse->getOffset()->getType()) != TyBits)
-        continue;
+    PHINode *PH = dyn_cast<PHINode>(ShadowUse->getOperand(0));
+    if (!PH) continue;
+    if (PH->getNumIncomingValues() != 2) continue;
 
-      // If scale is negative, use swapped predicate unless it's testing
-      // for equality.
-      if (Scale < 0 && !Cond->isEquality())
-        Predicate = ICmpInst::getSwappedPredicate(Predicate);
+    const Type *SrcTy = PH->getType();
+    int Mantissa = DestTy->getFPMantissaWidth();
+    if (Mantissa == -1) continue;
+    if ((int)SE.getTypeSizeInBits(SrcTy) > Mantissa)
+      continue;
 
-      NewStride = IU->StrideOrder[i];
-      if (!isa<PointerType>(NewCmpTy))
-        NewCmpRHS = ConstantInt::get(NewCmpTy, NewCmpVal);
-      else {
-        Constant *CI = ConstantInt::get(NewCmpIntTy, NewCmpVal);
-        NewCmpRHS = ConstantExpr::getIntToPtr(CI, NewCmpTy);
-      }
-      NewOffset = TyBits == NewTyBits
-        ? SE->getMulExpr(CondUse->getOffset(),
-                         SE->getConstant(CmpTy, Scale))
-        : SE->getConstant(NewCmpIntTy,
-          cast<SCEVConstant>(CondUse->getOffset())->getValue()
-            ->getSExtValue()*Scale);
-      break;
+    unsigned Entry, Latch;
+    if (PH->getIncomingBlock(0) == L->getLoopPreheader()) {
+      Entry = 0;
+      Latch = 1;
+    } else {
+      Entry = 1;
+      Latch = 0;
     }
-  }
 
-  // Forgo this transformation if it the increment happens to be
-  // unfortunately positioned after the condition, and the condition
-  // has multiple uses which prevent it from being moved immediately
-  // before the branch. See
-  // test/Transforms/LoopStrengthReduce/change-compare-stride-trickiness-*.ll
-  // for an example of this situation.
-  if (!Cond->hasOneUse()) {
-    for (BasicBlock::iterator I = Cond, E = Cond->getParent()->end();
-         I != E; ++I)
-      if (I == NewCmpLHS)
-        return Cond;
-  }
+    ConstantInt *Init = dyn_cast<ConstantInt>(PH->getIncomingValue(Entry));
+    if (!Init) continue;
+    Constant *NewInit = ConstantFP::get(DestTy, Init->getZExtValue());
 
-  if (NewCmpRHS) {
-    // Create a new compare instruction using new stride / iv.
-    ICmpInst *OldCond = Cond;
-    // Insert new compare instruction.
-    Cond = new ICmpInst(OldCond, Predicate, NewCmpLHS, NewCmpRHS,
-                        L->getHeader()->getName() + ".termcond");
+    BinaryOperator *Incr =
+      dyn_cast<BinaryOperator>(PH->getIncomingValue(Latch));
+    if (!Incr) continue;
+    if (Incr->getOpcode() != Instruction::Add
+        && Incr->getOpcode() != Instruction::Sub)
+      continue;
+
+    /* Initialize new IV, double d = 0.0 in above example. */
+    ConstantInt *C = NULL;
+    if (Incr->getOperand(0) == PH)
+      C = dyn_cast<ConstantInt>(Incr->getOperand(1));
+    else if (Incr->getOperand(1) == PH)
+      C = dyn_cast<ConstantInt>(Incr->getOperand(0));
+    else
+      continue;
 
-    DEBUG(dbgs() << "    Change compare stride in Inst " << *OldCond);
-    DEBUG(dbgs() << " to " << *Cond << '\n');
+    if (!C) continue;
 
-    // Remove the old compare instruction. The old indvar is probably dead too.
-    DeadInsts.push_back(CondUse->getOperandValToReplace());
-    OldCond->replaceAllUsesWith(Cond);
-    OldCond->eraseFromParent();
+    // Ignore negative constants, as the code below doesn't handle them
+    // correctly. TODO: Remove this restriction.
+    if (!C->getValue().isStrictlyPositive()) continue;
 
-    IU->IVUsesByStride[NewStride]->addUser(NewOffset, Cond, NewCmpLHS);
-    CondUse = &IU->IVUsesByStride[NewStride]->Users.back();
-    CondStride = NewStride;
-    ++NumEliminated;
-    Changed = true;
+    /* Add new PHINode. */
+    PHINode *NewPH = PHINode::Create(DestTy, "IV.S.", PH);
+
+    /* create new increment. '++d' in above example. */
+    Constant *CFP = ConstantFP::get(DestTy, C->getZExtValue());
+    BinaryOperator *NewIncr =
+      BinaryOperator::Create(Incr->getOpcode() == Instruction::Add ?
+                               Instruction::FAdd : Instruction::FSub,
+                             NewPH, CFP, "IV.S.next.", Incr);
+
+    NewPH->addIncoming(NewInit, PH->getIncomingBlock(Entry));
+    NewPH->addIncoming(NewIncr, PH->getIncomingBlock(Latch));
+
+    /* Remove cast operation */
+    ShadowUse->replaceAllUsesWith(NewPH);
+    ShadowUse->eraseFromParent();
+    break;
   }
+}
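The `getFPMantissaWidth` guard in OptimizeShadowIV exists because the floating-point shadow IV must represent every value of the original integer IV exactly; once the integer width exceeds the mantissa, consecutive counter values collide. A small self-contained illustration of that boundary for `double` (53-bit significand; helper names are illustrative, not from the patch):

```cpp
#include <cstdint>

// True if the 32-bit value survives a round-trip through double; this
// holds for every uint32_t, since 32 < double's 53 mantissa bits.
bool roundTripsThroughDouble(uint32_t I) {
  return (uint32_t)(double)I == I;
}

// True if X and X+1 are distinct doubles. This fails at 2^53, where a
// floating-point shadow IV would stop counting.
bool nextIntegerDistinct(double X) {
  return X + 1.0 != X;
}
```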
 
-  return Cond;
+/// FindIVUserForCond - If Cond has an operand that is an expression of an IV,
+/// set the IV user and stride information and return true, otherwise return
+/// false.
+bool LSRInstance::FindIVUserForCond(ICmpInst *Cond,
+                                    IVStrideUse *&CondUse) {
+  for (IVUsers::iterator UI = IU.begin(), E = IU.end(); UI != E; ++UI)
+    if (UI->getUser() == Cond) {
+      // NOTE: we could handle setcc instructions with multiple uses here, but
+      // InstCombine does it as well for simple uses; it's not clear that it
+      // occurs enough in real life to handle.
+      CondUse = UI;
+      return true;
+    }
+  return false;
 }
 
 /// OptimizeMax - Rewrite the loop's terminating condition if it uses
@@ -2087,7 +1371,7 @@ ICmpInst *LoopStrengthReduce::ChangeCompareStride(Loop *L, ICmpInst *Cond,
 /// are designed around them. The most obvious example of this is the
 /// LoopInfo analysis, which doesn't remember trip count values. It
 /// expects to be able to rediscover the trip count each time it is
-/// needed, and it does this using a simple analyis that only succeeds if
+/// needed, and it does this using a simple analysis that only succeeds if
 /// the loop has a canonical induction variable.
 ///
 /// However, when it comes time to generate code, the maximum operation
@@ -2097,8 +1381,7 @@ ICmpInst *LoopStrengthReduce::ChangeCompareStride(Loop *L, ICmpInst *Cond,
 /// rewriting their conditions from ICMP_NE back to ICMP_SLT, and deleting
 /// the instructions for the maximum computation.
 ///
-ICmpInst *LoopStrengthReduce::OptimizeMax(Loop *L, ICmpInst *Cond,
-                                          IVStrideUse* &CondUse) {
+ICmpInst *LSRInstance::OptimizeMax(ICmpInst *Cond, IVStrideUse* &CondUse) {
   // Check that the loop matches the pattern we're looking for.
   if (Cond->getPredicate() != CmpInst::ICMP_EQ &&
       Cond->getPredicate() != CmpInst::ICMP_NE)
@@ -2107,19 +1390,19 @@ ICmpInst *LoopStrengthReduce::OptimizeMax(Loop *L, ICmpInst *Cond,
   SelectInst *Sel = dyn_cast<SelectInst>(Cond->getOperand(1));
   if (!Sel || !Sel->hasOneUse()) return Cond;
 
-  const SCEV *BackedgeTakenCount = SE->getBackedgeTakenCount(L);
+  const SCEV *BackedgeTakenCount = SE.getBackedgeTakenCount(L);
   if (isa<SCEVCouldNotCompute>(BackedgeTakenCount))
     return Cond;
-  const SCEV *One = SE->getIntegerSCEV(1, BackedgeTakenCount->getType());
+  const SCEV *One = SE.getIntegerSCEV(1, BackedgeTakenCount->getType());
 
   // Add one to the backedge-taken count to get the trip count.
-  const SCEV *IterationCount = SE->getAddExpr(BackedgeTakenCount, One);
+  const SCEV *IterationCount = SE.getAddExpr(BackedgeTakenCount, One);
 
   // Check for a max calculation that matches the pattern.
   if (!isa<SCEVSMaxExpr>(IterationCount) && !isa<SCEVUMaxExpr>(IterationCount))
     return Cond;
   const SCEVNAryExpr *Max = cast<SCEVNAryExpr>(IterationCount);
-  if (Max != SE->getSCEV(Sel)) return Cond;
+  if (Max != SE.getSCEV(Sel)) return Cond;
 
   // To handle a max with more than two operands, this optimization would
   // require additional checking and setup.
@@ -2129,14 +1412,13 @@ ICmpInst *LoopStrengthReduce::OptimizeMax(Loop *L, ICmpInst *Cond,
   const SCEV *MaxLHS = Max->getOperand(0);
   const SCEV *MaxRHS = Max->getOperand(1);
   if (!MaxLHS || MaxLHS != One) return Cond;
-
   // Check the relevant induction variable for conformance to
   // the pattern.
-  const SCEV *IV = SE->getSCEV(Cond->getOperand(0));
+  const SCEV *IV = SE.getSCEV(Cond->getOperand(0));
   const SCEVAddRecExpr *AR = dyn_cast<SCEVAddRecExpr>(IV);
   if (!AR || !AR->isAffine() ||
       AR->getStart() != One ||
-      AR->getStepRecurrence(*SE) != One)
+      AR->getStepRecurrence(SE) != One)
     return Cond;
 
   assert(AR->getLoop() == L &&
@@ -2145,9 +1427,9 @@ ICmpInst *LoopStrengthReduce::OptimizeMax(Loop *L, ICmpInst *Cond,
   // Check the right operand of the select, and remember it, as it will
   // be used in the new comparison instruction.
   Value *NewRHS = 0;
-  if (SE->getSCEV(Sel->getOperand(1)) == MaxRHS)
+  if (SE.getSCEV(Sel->getOperand(1)) == MaxRHS)
     NewRHS = Sel->getOperand(1);
-  else if (SE->getSCEV(Sel->getOperand(2)) == MaxRHS)
+  else if (SE.getSCEV(Sel->getOperand(2)) == MaxRHS)
     NewRHS = Sel->getOperand(2);
   if (!NewRHS) return Cond;
 
@@ -2174,552 +1456,1761 @@ ICmpInst *LoopStrengthReduce::OptimizeMax(Loop *L, ICmpInst *Cond,
   return NewCond;
 }
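
[Editor's aside, not part of the patch: the rewrite OptimizeMax performs can be sketched at the source level. The function names below are invented for illustration; the point is that the NE-against-max exit test and the SLT exit test agree for every n, so the max computation can be deleted.]

```cpp
#include <algorithm>
#include <cassert>

// Trip count guarded by a max, as ScalarEvolution produces it when it
// cannot find a sufficient guard: this loop runs max(1, n) - 1 times.
int tripCountWithMax(int n) {
  int iters = 0;
  for (int i = 1; i != std::max(1, n); ++i) // ICMP_NE against the max
    ++iters;
  return iters;
}

// After OptimizeMax: the NE-against-max compare becomes SLT against the
// plain bound (NewRHS), and the max computation is deleted.
int tripCountWithSlt(int n) {
  int iters = 0;
  for (int i = 1; i < n; ++i)               // ICMP_SLT against NewRHS
    ++iters;
  return iters;
}
```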
 
-/// OptimizeShadowIV - If IV is used in a int-to-float cast
-/// inside the loop then try to eliminate the cast opeation.
-void LoopStrengthReduce::OptimizeShadowIV(Loop *L) {
+/// OptimizeLoopTermCond - Change loop terminating condition to use the
+/// postinc iv when possible.
+bool
+LSRInstance::OptimizeLoopTermCond() {
+  SmallPtrSet<Instruction *, 4> PostIncs;
 
-  const SCEV *BackedgeTakenCount = SE->getBackedgeTakenCount(L);
-  if (isa<SCEVCouldNotCompute>(BackedgeTakenCount))
-    return;
+  BasicBlock *LatchBlock = L->getLoopLatch();
+  SmallVector<BasicBlock*, 8> ExitingBlocks;
+  L->getExitingBlocks(ExitingBlocks);
+
+  for (unsigned i = 0, e = ExitingBlocks.size(); i != e; ++i) {
+    BasicBlock *ExitingBlock = ExitingBlocks[i];
+
+    // Get the terminating condition for the loop if possible.  If we
+    // can, we want to change it to use a post-incremented version of its
+    // induction variable, to allow coalescing the live ranges for the IV into
+    // one register value.
+
+    BranchInst *TermBr = dyn_cast<BranchInst>(ExitingBlock->getTerminator());
+    if (!TermBr)
+      continue;
+    // FIXME: Overly conservative; the termination condition could be an 'or', etc.
+    if (TermBr->isUnconditional() || !isa<ICmpInst>(TermBr->getCondition()))
+      continue;
 
-  for (unsigned Stride = 0, e = IU->StrideOrder.size(); Stride != e;
-       ++Stride) {
-    std::map<const SCEV *, IVUsersOfOneStride *>::iterator SI =
-      IU->IVUsesByStride.find(IU->StrideOrder[Stride]);
-    assert(SI != IU->IVUsesByStride.end() && "Stride doesn't exist!");
-    if (!isa<SCEVConstant>(SI->first))
+    // Search IVUsesByStride to find Cond's IVUse if there is one.
+    IVStrideUse *CondUse = 0;
+    ICmpInst *Cond = cast<ICmpInst>(TermBr->getCondition());
+    if (!FindIVUserForCond(Cond, CondUse))
       continue;
 
-    for (ilist<IVStrideUse>::iterator UI = SI->second->Users.begin(),
-           E = SI->second->Users.end(); UI != E; /* empty */) {
-      ilist<IVStrideUse>::iterator CandidateUI = UI;
-      ++UI;
-      Instruction *ShadowUse = CandidateUI->getUser();
-      const Type *DestTy = NULL;
-
-      /* If shadow use is a int->float cast then insert a second IV
-         to eliminate this cast.
-
-           for (unsigned i = 0; i < n; ++i)
-             foo((double)i);
-
-         is transformed into
-
-           double d = 0.0;
-           for (unsigned i = 0; i < n; ++i, ++d)
-             foo(d);
-      */
-      if (UIToFPInst *UCast = dyn_cast<UIToFPInst>(CandidateUI->getUser()))
-        DestTy = UCast->getDestTy();
-      else if (SIToFPInst *SCast = dyn_cast<SIToFPInst>(CandidateUI->getUser()))
-        DestTy = SCast->getDestTy();
-      if (!DestTy) continue;
-
-      if (TLI) {
-        // If target does not support DestTy natively then do not apply
-        // this transformation.
-        EVT DVT = TLI->getValueType(DestTy);
-        if (!TLI->isTypeLegal(DVT)) continue;
-      }
+    // If the trip count is computed in terms of a max (due to ScalarEvolution
+    // being unable to find a sufficient guard, for example), change the loop
+    // comparison to use SLT or ULT instead of NE.
+    // One consequence of doing this now is that it disrupts the count-down
+    // optimization. That's not always a bad thing though, because in such
+    // cases it may still be worthwhile to avoid a max.
+    Cond = OptimizeMax(Cond, CondUse);
+
+    // If this exiting block dominates the latch block, it may also use
+    // the post-inc value if it won't be shared with other uses.
+    // Check for dominance.
+    if (!DT.dominates(ExitingBlock, LatchBlock))
+      continue;
 
-      PHINode *PH = dyn_cast<PHINode>(ShadowUse->getOperand(0));
-      if (!PH) continue;
-      if (PH->getNumIncomingValues() != 2) continue;
+    // Conservatively avoid trying to use the post-inc value in non-latch
+    // exits if there may be pre-inc users in intervening blocks.
+    if (LatchBlock != ExitingBlock)
+      for (IVUsers::const_iterator UI = IU.begin(), E = IU.end(); UI != E; ++UI)
+        // Test if the use is reachable from the exiting block. This dominator
+        // query is a conservative approximation of reachability.
+        if (&*UI != CondUse &&
+            !DT.properlyDominates(UI->getUser()->getParent(), ExitingBlock)) {
+          // Conservatively assume there may be reuse if the quotient of their
+          // strides could be a legal scale.
+          const SCEV *A = CondUse->getStride();
+          const SCEV *B = UI->getStride();
+          if (SE.getTypeSizeInBits(A->getType()) !=
+              SE.getTypeSizeInBits(B->getType())) {
+            if (SE.getTypeSizeInBits(A->getType()) >
+                SE.getTypeSizeInBits(B->getType()))
+              B = SE.getSignExtendExpr(B, A->getType());
+            else
+              A = SE.getSignExtendExpr(A, B->getType());
+          }
+          if (const SCEVConstant *D =
+                dyn_cast_or_null<SCEVConstant>(getSDiv(B, A, SE))) {
+            // Stride of one or negative one can have reuse with non-addresses.
+            if (D->getValue()->isOne() ||
+                D->getValue()->isAllOnesValue())
+              goto decline_post_inc;
+            // Avoid weird situations.
+            if (D->getValue()->getValue().getMinSignedBits() >= 64 ||
+                D->getValue()->getValue().isMinSignedValue())
+              goto decline_post_inc;
+            // Without TLI, assume that any stride might be valid, and so any
+            // use might be shared.
+            if (!TLI)
+              goto decline_post_inc;
+            // Check for possible scaled-address reuse.
+            const Type *AccessTy = getAccessType(UI->getUser());
+            TargetLowering::AddrMode AM;
+            AM.Scale = D->getValue()->getSExtValue();
+            if (TLI->isLegalAddressingMode(AM, AccessTy))
+              goto decline_post_inc;
+            AM.Scale = -AM.Scale;
+            if (TLI->isLegalAddressingMode(AM, AccessTy))
+              goto decline_post_inc;
+          }
+        }
 
-      const Type *SrcTy = PH->getType();
-      int Mantissa = DestTy->getFPMantissaWidth();
-      if (Mantissa == -1) continue;
-      if ((int)SE->getTypeSizeInBits(SrcTy) > Mantissa)
-        continue;
+    DEBUG(dbgs() << "  Change loop exiting icmp to use postinc iv: "
+                 << *Cond << '\n');
 
-      unsigned Entry, Latch;
-      if (PH->getIncomingBlock(0) == L->getLoopPreheader()) {
-        Entry = 0;
-        Latch = 1;
+    // It's possible for the setcc instruction to be anywhere in the loop, and
+    // possible for it to have multiple users.  If it is not immediately before
+    // the exiting block branch, move it.
+    if (&*++BasicBlock::iterator(Cond) != TermBr) {
+      if (Cond->hasOneUse()) {
+        Cond->moveBefore(TermBr);
       } else {
-        Entry = 1;
-        Latch = 0;
+        // Clone the terminating condition and insert into the loopend.
+        ICmpInst *OldCond = Cond;
+        Cond = cast<ICmpInst>(Cond->clone());
+        Cond->setName(L->getHeader()->getName() + ".termcond");
+        ExitingBlock->getInstList().insert(TermBr, Cond);
+
+        // Clone the IVUse, as the old use still exists!
+        CondUse = &IU.AddUser(CondUse->getStride(), CondUse->getOffset(),
+                              Cond, CondUse->getOperandValToReplace());
+        TermBr->replaceUsesOfWith(OldCond, Cond);
       }
+    }
 
-      ConstantInt *Init = dyn_cast<ConstantInt>(PH->getIncomingValue(Entry));
-      if (!Init) continue;
-      Constant *NewInit = ConstantFP::get(DestTy, Init->getZExtValue());
+    // If we get to here, we know that we can transform the setcc instruction to
+    // use the post-incremented version of the IV, allowing us to coalesce the
+    // live ranges for the IV correctly.
+    CondUse->setOffset(SE.getMinusSCEV(CondUse->getOffset(),
+                                       CondUse->getStride()));
+    CondUse->setIsUseOfPostIncrementedValue(true);
+    Changed = true;
 
-      BinaryOperator *Incr =
-        dyn_cast<BinaryOperator>(PH->getIncomingValue(Latch));
-      if (!Incr) continue;
-      if (Incr->getOpcode() != Instruction::Add
-          && Incr->getOpcode() != Instruction::Sub)
-        continue;
+    PostIncs.insert(Cond);
+  decline_post_inc:;
+  }
 
-      /* Initialize new IV, double d = 0.0 in above example. */
-      ConstantInt *C = NULL;
-      if (Incr->getOperand(0) == PH)
-        C = dyn_cast<ConstantInt>(Incr->getOperand(1));
-      else if (Incr->getOperand(1) == PH)
-        C = dyn_cast<ConstantInt>(Incr->getOperand(0));
-      else
-        continue;
+  // Determine an insertion point for the loop induction variable increment. It
+  // must dominate all the post-inc comparisons we just set up, and it must
+  // dominate the loop latch edge.
+  IVIncInsertPos = L->getLoopLatch()->getTerminator();
+  for (SmallPtrSet<Instruction *, 4>::const_iterator I = PostIncs.begin(),
+       E = PostIncs.end(); I != E; ++I) {
+    BasicBlock *BB =
+      DT.findNearestCommonDominator(IVIncInsertPos->getParent(),
+                                    (*I)->getParent());
+    if (BB == (*I)->getParent())
+      IVIncInsertPos = *I;
+    else if (BB != IVIncInsertPos->getParent())
+      IVIncInsertPos = BB->getTerminator();
+  }
 
-      if (!C) continue;
+  return Changed;
+}
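
[Editor's aside, not part of the patch: a toy sketch of why testing the post-incremented IV helps. Both loops below count identically; in the second, the exit branch sees only the already-incremented value, so the IV's pre- and post-increment live ranges can share one register.]

```cpp
#include <cassert>

// Exit test on the pre-incremented IV: "i != n" is evaluated before the
// increment, so both old and new values of i are live across the branch.
int countPreInc(int n) {
  int iters = 0;
  int i = 0;
  while (i != n) { ++iters; ++i; }
  return iters;
}

// Exit test on the post-incremented IV: the compare sees the value after
// the increment (a rotated do-while form), coalescing the IV live ranges.
int countPostInc(int n) {
  int iters = 0;
  int i = 0;
  if (i != n)
    do { ++iters; ++i; } while (i != n); // branch on the incremented i
  return iters;
}
```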
+
+bool
+LSRInstance::reconcileNewOffset(LSRUse &LU, int64_t NewOffset,
+                                LSRUse::KindType Kind, const Type *AccessTy) {
+  int64_t NewMinOffset = LU.MinOffset;
+  int64_t NewMaxOffset = LU.MaxOffset;
+  const Type *NewAccessTy = AccessTy;
+
+  // Check for a mismatched kind. It's tempting to collapse mismatched kinds to
+  // something conservative, however this can pessimize in the case that one of
+  // the uses will have all its uses outside the loop, for example.
+  if (LU.Kind != Kind)
+    return false;
+  // Conservatively assume HasBaseReg is true for now.
+  if (NewOffset < LU.MinOffset) {
+    if (!isAlwaysFoldable(LU.MaxOffset - NewOffset, 0, /*HasBaseReg=*/true,
+                          Kind, AccessTy, TLI, SE))
+      return false;
+    NewMinOffset = NewOffset;
+  } else if (NewOffset > LU.MaxOffset) {
+    if (!isAlwaysFoldable(NewOffset - LU.MinOffset, 0, /*HasBaseReg=*/true,
+                          Kind, AccessTy, TLI, SE))
+      return false;
+    NewMaxOffset = NewOffset;
+  }
+  // Check for a mismatched access type, and fall back conservatively as needed.
+  if (Kind == LSRUse::Address && AccessTy != LU.AccessTy)
+    NewAccessTy = Type::getVoidTy(AccessTy->getContext());
+
+  // Update the use.
+  LU.MinOffset = NewMinOffset;
+  LU.MaxOffset = NewMaxOffset;
+  LU.AccessTy = NewAccessTy;
+  if (NewOffset != LU.Offsets.back())
+    LU.Offsets.push_back(NewOffset);
+  return true;
+}
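
[Editor's aside, not part of the patch: the offset-widening logic above can be sketched with a toy target. Here `spreadIsFoldable` is a hypothetical stand-in for `isAlwaysFoldable`, pretending the target folds immediate offsets only within a signed 12-bit field.]

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-in for isAlwaysFoldable: assume the target can fold
// an immediate offset only if it fits a signed 12-bit field.
static bool spreadIsFoldable(int64_t Spread) {
  return Spread >= -2048 && Spread < 2048;
}

struct ToyUse { int64_t MinOffset, MaxOffset; };

// Widen [MinOffset, MaxOffset] to admit NewOffset, mirroring the shape of
// reconcileNewOffset: either end may grow only while the whole offset
// spread remains foldable on the (toy) target.
static bool reconcile(ToyUse &LU, int64_t NewOffset) {
  if (NewOffset < LU.MinOffset) {
    if (!spreadIsFoldable(LU.MaxOffset - NewOffset)) return false;
    LU.MinOffset = NewOffset;
  } else if (NewOffset > LU.MaxOffset) {
    if (!spreadIsFoldable(NewOffset - LU.MinOffset)) return false;
    LU.MaxOffset = NewOffset;
  }
  return true;
}
```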
 
-      // Ignore negative constants, as the code below doesn't handle them
-      // correctly. TODO: Remove this restriction.
-      if (!C->getValue().isStrictlyPositive()) continue;
+/// getUse - Return an LSRUse index and an offset value for a fixup which
+/// needs the given expression, with the given kind and optional access type.
+/// Either reuse an existing use or create a new one, as needed.
+std::pair<size_t, int64_t>
+LSRInstance::getUse(const SCEV *&Expr,
+                    LSRUse::KindType Kind, const Type *AccessTy) {
+  const SCEV *Copy = Expr;
+  int64_t Offset = ExtractImmediate(Expr, SE);
+
+  // Basic uses can't accept any offset, for example.
+  if (!isAlwaysFoldable(Offset, 0, /*HasBaseReg=*/true,
+                        Kind, AccessTy, TLI, SE)) {
+    Expr = Copy;
+    Offset = 0;
+  }
 
-      /* Add new PHINode. */
-      PHINode *NewPH = PHINode::Create(DestTy, "IV.S.", PH);
+  std::pair<UseMapTy::iterator, bool> P =
+    UseMap.insert(std::make_pair(Expr, 0));
+  if (!P.second) {
+    // A use already existed with this base.
+    size_t LUIdx = P.first->second;
+    LSRUse &LU = Uses[LUIdx];
+    if (reconcileNewOffset(LU, Offset, Kind, AccessTy))
+      // Reuse this use.
+      return std::make_pair(LUIdx, Offset);
+  }
 
-      /* create new increment. '++d' in above example. */
-      Constant *CFP = ConstantFP::get(DestTy, C->getZExtValue());
-      BinaryOperator *NewIncr =
-        BinaryOperator::Create(Incr->getOpcode() == Instruction::Add ?
-                                 Instruction::FAdd : Instruction::FSub,
-                               NewPH, CFP, "IV.S.next.", Incr);
+  // Create a new use.
+  size_t LUIdx = Uses.size();
+  P.first->second = LUIdx;
+  Uses.push_back(LSRUse(Kind, AccessTy));
+  LSRUse &LU = Uses[LUIdx];
 
-      NewPH->addIncoming(NewInit, PH->getIncomingBlock(Entry));
-      NewPH->addIncoming(NewIncr, PH->getIncomingBlock(Latch));
+  // We don't need to track redundant offsets, but we don't need to go out
+  // of our way here to avoid them.
+  if (LU.Offsets.empty() || Offset != LU.Offsets.back())
+    LU.Offsets.push_back(Offset);
 
-      /* Remove cast operation */
-      ShadowUse->replaceAllUsesWith(NewPH);
-      ShadowUse->eraseFromParent();
-      NumShadow++;
-      break;
+  LU.MinOffset = Offset;
+  LU.MaxOffset = Offset;
+  return std::make_pair(LUIdx, Offset);
+}
+
+void LSRInstance::CollectInterestingTypesAndFactors() {
+  SmallSetVector<const SCEV *, 4> Strides;
+
+  // Collect interesting types and factors.
+  for (IVUsers::const_iterator UI = IU.begin(), E = IU.end(); UI != E; ++UI) {
+    const SCEV *Stride = UI->getStride();
+
+    // Collect interesting types.
+    Types.insert(SE.getEffectiveSCEVType(Stride->getType()));
+
+    // Collect interesting factors.
+    for (SmallSetVector<const SCEV *, 4>::const_iterator NewStrideIter =
+         Strides.begin(), SEnd = Strides.end(); NewStrideIter != SEnd;
+         ++NewStrideIter) {
+      const SCEV *OldStride = Stride;
+      const SCEV *NewStride = *NewStrideIter;
+      if (OldStride == NewStride)
+        continue;
+
+      if (SE.getTypeSizeInBits(OldStride->getType()) !=
+          SE.getTypeSizeInBits(NewStride->getType())) {
+        if (SE.getTypeSizeInBits(OldStride->getType()) >
+            SE.getTypeSizeInBits(NewStride->getType()))
+          NewStride = SE.getSignExtendExpr(NewStride, OldStride->getType());
+        else
+          OldStride = SE.getSignExtendExpr(OldStride, NewStride->getType());
+      }
+      if (const SCEVConstant *Factor =
+            dyn_cast_or_null<SCEVConstant>(getSDiv(NewStride, OldStride,
+                                                   SE, true))) {
+        if (Factor->getValue()->getValue().getMinSignedBits() <= 64)
+          Factors.insert(Factor->getValue()->getValue().getSExtValue());
+      } else if (const SCEVConstant *Factor =
+                   dyn_cast_or_null<SCEVConstant>(getSDiv(OldStride, NewStride,
+                                                          SE, true))) {
+        if (Factor->getValue()->getValue().getMinSignedBits() <= 64)
+          Factors.insert(Factor->getValue()->getValue().getSExtValue());
+      }
     }
+    Strides.insert(Stride);
   }
-}
 
-/// OptimizeIndvars - Now that IVUsesByStride is set up with all of the indvar
-/// uses in the loop, look to see if we can eliminate some, in favor of using
-/// common indvars for the different uses.
-void LoopStrengthReduce::OptimizeIndvars(Loop *L) {
-  // TODO: implement optzns here.
+  // If all uses use the same type, don't bother looking for truncation-based
+  // reuse.
+  if (Types.size() == 1)
+    Types.clear();
 
-  OptimizeShadowIV(L);
+  DEBUG(print_factors_and_types(dbgs()));
 }
 
-bool LoopStrengthReduce::StrideMightBeShared(const SCEV* Stride, Loop *L,
-                                             bool CheckPreInc) {
-  int64_t SInt = cast<SCEVConstant>(Stride)->getValue()->getSExtValue();
-  for (unsigned i = 0, e = IU->StrideOrder.size(); i != e; ++i) {
-    std::map<const SCEV *, IVUsersOfOneStride *>::iterator SI =
-      IU->IVUsesByStride.find(IU->StrideOrder[i]);
-    const SCEV *Share = SI->first;
-    if (!isa<SCEVConstant>(SI->first) || Share == Stride)
-      continue;
-    int64_t SSInt = cast<SCEVConstant>(Share)->getValue()->getSExtValue();
-    if (SSInt == SInt)
-      return true; // This can definitely be reused.
-    if (unsigned(abs64(SSInt)) < SInt || (SSInt % SInt) != 0)
-      continue;
-    int64_t Scale = SSInt / SInt;
-    bool AllUsesAreAddresses = true;
-    bool AllUsesAreOutsideLoop = true;
-    std::vector<BasedUser> UsersToProcess;
-    const SCEV *CommonExprs = CollectIVUsers(SI->first, *SI->second, L,
-                                             AllUsesAreAddresses,
-                                             AllUsesAreOutsideLoop,
-                                             UsersToProcess);
-    if (AllUsesAreAddresses &&
-        ValidScale(!CommonExprs->isZero(), Scale, UsersToProcess)) {
-      if (!CheckPreInc)
-        return true;
-      // Any pre-inc iv use?
-      IVUsersOfOneStride &StrideUses = *IU->IVUsesByStride[Share];
-      for (ilist<IVStrideUse>::iterator I = StrideUses.Users.begin(),
-             E = StrideUses.Users.end(); I != E; ++I) {
-        if (!I->isUseOfPostIncrementedValue())
-          return true;
+void LSRInstance::CollectFixupsAndInitialFormulae() {
+  for (IVUsers::const_iterator UI = IU.begin(), E = IU.end(); UI != E; ++UI) {
+    // Record the uses.
+    LSRFixup &LF = getNewFixup();
+    LF.UserInst = UI->getUser();
+    LF.OperandValToReplace = UI->getOperandValToReplace();
+    if (UI->isUseOfPostIncrementedValue())
+      LF.PostIncLoop = L;
+
+    LSRUse::KindType Kind = LSRUse::Basic;
+    const Type *AccessTy = 0;
+    if (isAddressUse(LF.UserInst, LF.OperandValToReplace)) {
+      Kind = LSRUse::Address;
+      AccessTy = getAccessType(LF.UserInst);
+    }
+
+    const SCEV *S = IU.getCanonicalExpr(*UI);
+
+    // Equality (== and !=) ICmps are special. We can rewrite (i == N) as
+    // (N - i == 0), and this allows (N - i) to be the expression that we work
+    // with rather than just N or i, so we can consider the register
+    // requirements for both N and i at the same time. Limiting this code to
+    // equality icmps is not a problem because all interesting loops use
+    // equality icmps, thanks to IndVarSimplify.
+    if (ICmpInst *CI = dyn_cast<ICmpInst>(LF.UserInst))
+      if (CI->isEquality()) {
+        // Swap the operands if needed to put the OperandValToReplace on the
+        // left, for consistency.
+        Value *NV = CI->getOperand(1);
+        if (NV == LF.OperandValToReplace) {
+          CI->setOperand(1, CI->getOperand(0));
+          CI->setOperand(0, NV);
+        }
+
+        // x == y  -->  x - y == 0
+        const SCEV *N = SE.getSCEV(NV);
+        if (N->isLoopInvariant(L)) {
+          Kind = LSRUse::ICmpZero;
+          S = SE.getMinusSCEV(N, S);
+        }
+
+        // -1 and the negations of all interesting strides (except the negation
+        // of -1) are now also interesting.
+        for (size_t i = 0, e = Factors.size(); i != e; ++i)
+          if (Factors[i] != -1)
+            Factors.insert(-(uint64_t)Factors[i]);
+        Factors.insert(-1);
       }
+
+    // Set up the initial formula for this use.
+    std::pair<size_t, int64_t> P = getUse(S, Kind, AccessTy);
+    LF.LUIdx = P.first;
+    LF.Offset = P.second;
+    LSRUse &LU = Uses[LF.LUIdx];
+    LU.AllFixupsOutsideLoop &= !L->contains(LF.UserInst);
+
+    // If this is the first use of this LSRUse, give it a formula.
+    if (LU.Formulae.empty()) {
+      InsertInitialFormula(S, L, LU, LF.LUIdx);
+      CountRegisters(LU.Formulae.back(), LF.LUIdx);
     }
   }
-  return false;
+
+  DEBUG(print_fixups(dbgs()));
 }
 
-/// isUsedByExitBranch - Return true if icmp is used by a loop terminating
-/// conditional branch or it's and / or with other conditions before being used
-/// as the condition.
-static bool isUsedByExitBranch(ICmpInst *Cond, Loop *L) {
-  BasicBlock *CondBB = Cond->getParent();
-  if (!L->isLoopExiting(CondBB))
-    return false;
-  BranchInst *TermBr = dyn_cast<BranchInst>(CondBB->getTerminator());
-  if (!TermBr || !TermBr->isConditional())
+void
+LSRInstance::InsertInitialFormula(const SCEV *S, Loop *L,
+                                  LSRUse &LU, size_t LUIdx) {
+  Formula F;
+  F.InitialMatch(S, L, SE, DT);
+  bool Inserted = InsertFormula(LU, LUIdx, F);
+  assert(Inserted && "Initial formula already exists!"); (void)Inserted;
+}
+
+void
+LSRInstance::InsertSupplementalFormula(const SCEV *S,
+                                       LSRUse &LU, size_t LUIdx) {
+  Formula F;
+  F.BaseRegs.push_back(S);
+  F.AM.HasBaseReg = true;
+  bool Inserted = InsertFormula(LU, LUIdx, F);
+  assert(Inserted && "Supplemental formula already exists!"); (void)Inserted;
+}
+
+/// CountRegisters - Note which registers are used by the given formula,
+/// updating RegUses.
+void LSRInstance::CountRegisters(const Formula &F, size_t LUIdx) {
+  if (F.ScaledReg)
+    RegUses.CountRegister(F.ScaledReg, LUIdx);
+  for (SmallVectorImpl<const SCEV *>::const_iterator I = F.BaseRegs.begin(),
+       E = F.BaseRegs.end(); I != E; ++I)
+    RegUses.CountRegister(*I, LUIdx);
+}
+
+/// InsertFormula - If the given formula has not yet been inserted, add it to
+/// the list, and return true. Return false otherwise.
+bool LSRInstance::InsertFormula(LSRUse &LU, unsigned LUIdx, const Formula &F) {
+  if (!LU.InsertFormula(LUIdx, F))
     return false;
 
-  Value *User = *Cond->use_begin();
-  Instruction *UserInst = dyn_cast<Instruction>(User);
-  while (UserInst &&
-         (UserInst->getOpcode() == Instruction::And ||
-          UserInst->getOpcode() == Instruction::Or)) {
-    if (!UserInst->hasOneUse() || UserInst->getParent() != CondBB)
-      return false;
-    User = *User->use_begin();
-    UserInst = dyn_cast<Instruction>(User);
+  CountRegisters(F, LUIdx);
+  return true;
+}
+
+/// CollectLoopInvariantFixupsAndFormulae - Check for other uses of
+/// loop-invariant values which we're tracking. These other uses will pin these
+/// values in registers, making them less profitable for elimination.
+/// TODO: This currently misses non-constant addrec step registers.
+/// TODO: Should this give more weight to users inside the loop?
+void
+LSRInstance::CollectLoopInvariantFixupsAndFormulae() {
+  SmallVector<const SCEV *, 8> Worklist(RegUses.begin(), RegUses.end());
+  SmallPtrSet<const SCEV *, 8> Inserted;
+
+  while (!Worklist.empty()) {
+    const SCEV *S = Worklist.pop_back_val();
+
+    if (const SCEVNAryExpr *N = dyn_cast<SCEVNAryExpr>(S))
+      Worklist.insert(Worklist.end(), N->op_begin(), N->op_end());
+    else if (const SCEVCastExpr *C = dyn_cast<SCEVCastExpr>(S))
+      Worklist.push_back(C->getOperand());
+    else if (const SCEVUDivExpr *D = dyn_cast<SCEVUDivExpr>(S)) {
+      Worklist.push_back(D->getLHS());
+      Worklist.push_back(D->getRHS());
+    } else if (const SCEVUnknown *U = dyn_cast<SCEVUnknown>(S)) {
+      if (!Inserted.insert(U)) continue;
+      const Value *V = U->getValue();
+      if (const Instruction *Inst = dyn_cast<Instruction>(V))
+        if (L->contains(Inst)) continue;
+      for (Value::use_const_iterator UI = V->use_begin(), UE = V->use_end();
+           UI != UE; ++UI) {
+        const Instruction *UserInst = dyn_cast<Instruction>(*UI);
+        // Ignore non-instructions.
+        if (!UserInst)
+          continue;
+        // Ignore instructions in other functions (as can happen with
+        // Constants).
+        if (UserInst->getParent()->getParent() != L->getHeader()->getParent())
+          continue;
+        // Ignore instructions not dominated by the loop.
+        const BasicBlock *UseBB = !isa<PHINode>(UserInst) ?
+          UserInst->getParent() :
+          cast<PHINode>(UserInst)->getIncomingBlock(
+            PHINode::getIncomingValueNumForOperand(UI.getOperandNo()));
+        if (!DT.dominates(L->getHeader(), UseBB))
+          continue;
+        // Ignore uses which are part of other SCEV expressions, to avoid
+        // analyzing them multiple times.
+        if (SE.isSCEVable(UserInst->getType()) &&
+            !isa<SCEVUnknown>(SE.getSCEV(const_cast<Instruction *>(UserInst))))
+          continue;
+        // Ignore icmp instructions which are already being analyzed.
+        if (const ICmpInst *ICI = dyn_cast<ICmpInst>(UserInst)) {
+          unsigned OtherIdx = !UI.getOperandNo();
+          Value *OtherOp = const_cast<Value *>(ICI->getOperand(OtherIdx));
+          if (SE.getSCEV(OtherOp)->hasComputableLoopEvolution(L))
+            continue;
+        }
+
+        LSRFixup &LF = getNewFixup();
+        LF.UserInst = const_cast<Instruction *>(UserInst);
+        LF.OperandValToReplace = UI.getUse();
+        std::pair<size_t, int64_t> P = getUse(S, LSRUse::Basic, 0);
+        LF.LUIdx = P.first;
+        LF.Offset = P.second;
+        LSRUse &LU = Uses[LF.LUIdx];
+        LU.AllFixupsOutsideLoop &= L->contains(LF.UserInst);
+        InsertSupplementalFormula(U, LU, LF.LUIdx);
+        CountRegisters(LU.Formulae.back(), Uses.size() - 1);
+        break;
+      }
+    }
   }
-  return User == TermBr;
 }
 
-static bool ShouldCountToZero(ICmpInst *Cond, IVStrideUse* &CondUse,
-                              ScalarEvolution *SE, Loop *L,
-                              const TargetLowering *TLI = 0) {
-  if (!L->contains(Cond))
-    return false;
+/// CollectSubexprs - Split S into subexpressions which can be pulled out into
+/// separate registers. If C is non-null, multiply each subexpression by C.
+static void CollectSubexprs(const SCEV *S, const SCEVConstant *C,
+                            SmallVectorImpl<const SCEV *> &Ops,
+                            ScalarEvolution &SE) {
+  if (const SCEVAddExpr *Add = dyn_cast<SCEVAddExpr>(S)) {
+    // Break out add operands.
+    for (SCEVAddExpr::op_iterator I = Add->op_begin(), E = Add->op_end();
+         I != E; ++I)
+      CollectSubexprs(*I, C, Ops, SE);
+    return;
+  } else if (const SCEVAddRecExpr *AR = dyn_cast<SCEVAddRecExpr>(S)) {
+    // Split a non-zero base out of an addrec.
+    if (!AR->getStart()->isZero()) {
+      CollectSubexprs(SE.getAddRecExpr(SE.getIntegerSCEV(0, AR->getType()),
+                                       AR->getStepRecurrence(SE),
+                                       AR->getLoop()), C, Ops, SE);
+      CollectSubexprs(AR->getStart(), C, Ops, SE);
+      return;
+    }
+  } else if (const SCEVMulExpr *Mul = dyn_cast<SCEVMulExpr>(S)) {
+    // Break (C * (a + b + c)) into C*a + C*b + C*c.
+    if (Mul->getNumOperands() == 2)
+      if (const SCEVConstant *Op0 =
+            dyn_cast<SCEVConstant>(Mul->getOperand(0))) {
+        CollectSubexprs(Mul->getOperand(1),
+                        C ? cast<SCEVConstant>(SE.getMulExpr(C, Op0)) : Op0,
+                        Ops, SE);
+        return;
+      }
+  }
 
-  if (!isa<SCEVConstant>(CondUse->getOffset()))
-    return false;
+  // Otherwise use the value itself.
+  Ops.push_back(C ? SE.getMulExpr(C, S) : S);
+}
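
[Editor's aside, not part of the patch: the recursion in CollectSubexprs can be sketched on a toy expression tree. The `ToyExpr` type is invented for illustration; the real code walks SCEVAddExpr / SCEVMulExpr / SCEVAddRecExpr nodes and carries the multiplier as a SCEVConstant.]

```cpp
#include <cassert>
#include <vector>

// Toy expression tree standing in for SCEV (hypothetical types).
struct ToyExpr {
  enum Kind { Leaf, Add, Mul } K;
  long Val;                          // Leaf value, or Mul's constant factor
  std::vector<const ToyExpr *> Ops;  // children for Add; one child for Mul
};

// Mirrors the shape of CollectSubexprs: flatten adds, and distribute a
// pending constant multiplier C across them, so C*(a + b) splits into the
// separately-registerable pieces C*a and C*b.
static void collectSubexprs(const ToyExpr *E, long C,
                            std::vector<long> &Out) {
  switch (E->K) {
  case ToyExpr::Add:                 // break out add operands
    for (const ToyExpr *Op : E->Ops)
      collectSubexprs(Op, C, Out);
    return;
  case ToyExpr::Mul:                 // fold the constant into the multiplier
    collectSubexprs(E->Ops[0], C * E->Val, Out);
    return;
  case ToyExpr::Leaf:
    Out.push_back(C * E->Val);       // apply the accumulated multiplier
    return;
  }
}
```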
 
-  // Handle only tests for equality for the moment.
-  if (!Cond->isEquality() || !Cond->hasOneUse())
-    return false;
-  if (!isUsedByExitBranch(Cond, L))
-    return false;
+/// GenerateReassociations - Split out subexpressions from adds and the bases of
+/// addrecs.
+void LSRInstance::GenerateReassociations(LSRUse &LU, unsigned LUIdx,
+                                         Formula Base,
+                                         unsigned Depth) {
+  // Arbitrarily cap recursion to protect compile time.
+  if (Depth >= 3) return;
+
+  for (size_t i = 0, e = Base.BaseRegs.size(); i != e; ++i) {
+    const SCEV *BaseReg = Base.BaseRegs[i];
+
+    SmallVector<const SCEV *, 8> AddOps;
+    CollectSubexprs(BaseReg, 0, AddOps, SE);
+    if (AddOps.size() == 1) continue;
+
+    for (SmallVectorImpl<const SCEV *>::const_iterator J = AddOps.begin(),
+         JE = AddOps.end(); J != JE; ++J) {
+      // Don't pull a constant into a register if the constant could be folded
+      // into an immediate field.
+      if (isAlwaysFoldable(*J, LU.MinOffset, LU.MaxOffset,
+                           Base.getNumRegs() > 1,
+                           LU.Kind, LU.AccessTy, TLI, SE))
+        continue;
 
-  Value *CondOp0 = Cond->getOperand(0);
-  const SCEV *IV = SE->getSCEV(CondOp0);
-  const SCEVAddRecExpr *AR = dyn_cast<SCEVAddRecExpr>(IV);
-  if (!AR || !AR->isAffine())
-    return false;
+      // Collect all operands except *J.
+      SmallVector<const SCEV *, 8> InnerAddOps;
+      for (SmallVectorImpl<const SCEV *>::const_iterator K = AddOps.begin(),
+           KE = AddOps.end(); K != KE; ++K)
+        if (K != J)
+          InnerAddOps.push_back(*K);
+
+      // Don't leave just a constant behind in a register if the constant could
+      // be folded into an immediate field.
+      if (InnerAddOps.size() == 1 &&
+          isAlwaysFoldable(InnerAddOps[0], LU.MinOffset, LU.MaxOffset,
+                           Base.getNumRegs() > 1,
+                           LU.Kind, LU.AccessTy, TLI, SE))
+        continue;
 
-  const SCEVConstant *SC = dyn_cast<SCEVConstant>(AR->getStepRecurrence(*SE));
-  if (!SC || SC->getValue()->getSExtValue() < 0)
-    // If it's already counting down, don't do anything.
-    return false;
+      Formula F = Base;
+      F.BaseRegs[i] = SE.getAddExpr(InnerAddOps);
+      F.BaseRegs.push_back(*J);
+      if (InsertFormula(LU, LUIdx, F))
+        // If that formula hadn't been seen before, recurse to find more like
+        // it.
+        GenerateReassociations(LU, LUIdx, LU.Formulae.back(), Depth+1);
+    }
+  }
+}
 
-  // If the RHS of the comparison is not an loop invariant, the rewrite
-  // cannot be done. Also bail out if it's already comparing against a zero.
-  // If we are checking this before cmp stride optimization, check if it's
-  // comparing against a already legal immediate.
-  Value *RHS = Cond->getOperand(1);
-  ConstantInt *RHSC = dyn_cast<ConstantInt>(RHS);
-  if (!L->isLoopInvariant(RHS) ||
-      (RHSC && RHSC->isZero()) ||
-      (RHSC && TLI && TLI->isLegalICmpImmediate(RHSC->getSExtValue())))
-    return false;
+/// GenerateCombinations - Generate a formula consisting of all of the
+/// loop-dominating registers added into a single register.
+void LSRInstance::GenerateCombinations(LSRUse &LU, unsigned LUIdx,
+                                       Formula Base) {
+  // This method is only interesting on a plurality of registers.
+  if (Base.BaseRegs.size() <= 1) return;
+
+  Formula F = Base;
+  F.BaseRegs.clear();
+  SmallVector<const SCEV *, 4> Ops;
+  for (SmallVectorImpl<const SCEV *>::const_iterator
+       I = Base.BaseRegs.begin(), E = Base.BaseRegs.end(); I != E; ++I) {
+    const SCEV *BaseReg = *I;
+    if (BaseReg->properlyDominates(L->getHeader(), &DT) &&
+        !BaseReg->hasComputableLoopEvolution(L))
+      Ops.push_back(BaseReg);
+    else
+      F.BaseRegs.push_back(BaseReg);
+  }
+  if (Ops.size() > 1) {
+    const SCEV *Sum = SE.getAddExpr(Ops);
+    // TODO: If Sum is zero, it probably means ScalarEvolution missed an
+    // opportunity to fold something. For now, just ignore such cases
+    // rather than proceed with zero in a register.
+    if (!Sum->isZero()) {
+      F.BaseRegs.push_back(Sum);
+      (void)InsertFormula(LU, LUIdx, F);
+    }
+  }
+}
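The combining step above can be sketched outside of LLVM. The following is a minimal standalone model, not the patch's code: registers are plain values with an `Invariant` flag standing in for the `properlyDominates`/`hasComputableLoopEvolution` test, and the sum stands in for `SE.getAddExpr`.

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// Model of GenerateCombinations: partition the base registers by a
// predicate (here a precomputed "loop-invariant" flag) and fold all
// invariant ones into a single summed register.
struct Reg { long Val; bool Invariant; };

std::vector<long> CombineInvariants(const std::vector<Reg> &Regs) {
  std::vector<long> Out;  // registers kept as-is
  std::vector<long> Ops;  // candidates for combining
  for (const Reg &R : Regs)
    (R.Invariant ? Ops : Out).push_back(R.Val);
  // Only worth combining when more than one register would be merged.
  if (Ops.size() > 1) {
    long Sum = std::accumulate(Ops.begin(), Ops.end(), 0L);
    if (Sum != 0)  // skip a zero sum, as the patch does
      Out.push_back(Sum);
  } else {
    Out.insert(Out.end(), Ops.begin(), Ops.end());
  }
  return Out;
}
```

Unlike the patch, which simply declines to insert a new formula when fewer than two registers qualify, this sketch re-appends a lone candidate so the returned set stays complete.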
 
-  // Make sure the IV is only used for counting.  Value may be preinc or
-  // postinc; 2 uses in either case.
-  if (!CondOp0->hasNUses(2))
-    return false;
+/// GenerateSymbolicOffsets - Generate reuse formulae using symbolic offsets.
+void LSRInstance::GenerateSymbolicOffsets(LSRUse &LU, unsigned LUIdx,
+                                          Formula Base) {
+  // We can't add a symbolic offset if the address already contains one.
+  if (Base.AM.BaseGV) return;
 
-  return true;
+  for (size_t i = 0, e = Base.BaseRegs.size(); i != e; ++i) {
+    const SCEV *G = Base.BaseRegs[i];
+    GlobalValue *GV = ExtractSymbol(G, SE);
+    if (G->isZero() || !GV)
+      continue;
+    Formula F = Base;
+    F.AM.BaseGV = GV;
+    if (!isLegalUse(F.AM, LU.MinOffset, LU.MaxOffset,
+                    LU.Kind, LU.AccessTy, TLI))
+      continue;
+    F.BaseRegs[i] = G;
+    (void)InsertFormula(LU, LUIdx, F);
+  }
 }
 
-/// OptimizeLoopTermCond - Change loop terminating condition to use the
-/// postinc iv when possible.
-void LoopStrengthReduce::OptimizeLoopTermCond(Loop *L) {
-  BasicBlock *LatchBlock = L->getLoopLatch();
-  bool LatchExit = L->isLoopExiting(LatchBlock);
-  SmallVector<BasicBlock*, 8> ExitingBlocks;
-  L->getExitingBlocks(ExitingBlocks);
+/// GenerateConstantOffsets - Generate reuse formulae using constant offsets.
+void LSRInstance::GenerateConstantOffsets(LSRUse &LU, unsigned LUIdx,
+                                          Formula Base) {
+  // TODO: For now, just add the min and max offset, because it usually isn't
+  // worthwhile looking at everything in between.
+  SmallVector<int64_t, 4> Worklist;
+  Worklist.push_back(LU.MinOffset);
+  if (LU.MaxOffset != LU.MinOffset)
+    Worklist.push_back(LU.MaxOffset);
+
+  for (size_t i = 0, e = Base.BaseRegs.size(); i != e; ++i) {
+    const SCEV *G = Base.BaseRegs[i];
+
+    for (SmallVectorImpl<int64_t>::const_iterator I = Worklist.begin(),
+         E = Worklist.end(); I != E; ++I) {
+      Formula F = Base;
+      F.AM.BaseOffs = (uint64_t)Base.AM.BaseOffs - *I;
+      if (isLegalUse(F.AM, LU.MinOffset - *I, LU.MaxOffset - *I,
+                     LU.Kind, LU.AccessTy, TLI)) {
+        F.BaseRegs[i] = SE.getAddExpr(G, SE.getIntegerSCEV(*I, G->getType()));
+
+        (void)InsertFormula(LU, LUIdx, F);
+      }
+    }
 
-  for (unsigned i = 0, e = ExitingBlocks.size(); i != e; ++i) {
-    BasicBlock *ExitingBlock = ExitingBlocks[i];
+    int64_t Imm = ExtractImmediate(G, SE);
+    if (G->isZero() || Imm == 0)
+      continue;
+    Formula F = Base;
+    F.AM.BaseOffs = (uint64_t)F.AM.BaseOffs + Imm;
+    if (!isLegalUse(F.AM, LU.MinOffset, LU.MaxOffset,
+                    LU.Kind, LU.AccessTy, TLI))
+      continue;
+    F.BaseRegs[i] = G;
+    (void)InsertFormula(LU, LUIdx, F);
+  }
+}
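The identity behind `GenerateConstantOffsets` is that a constant can be moved between the register expression and the addressing mode's immediate field without changing the addressed value. A tiny sketch under that reading, with plain integers standing in for SCEV expressions and the `AM.BaseOffs` field:

```cpp
#include <cassert>

// Model of the offset shift: (Reg) + BaseOffs is rewritten as
// (Reg + I) + (BaseOffs - I); the sum, i.e. the address computed,
// is unchanged, but the split may now match a legal addressing mode.
struct AddrForm { long RegExpr; long BaseOffs; };

AddrForm ShiftOffset(AddrForm F, long I) {
  F.RegExpr += I;   // fold I into the register expression
  F.BaseOffs -= I;  // compensate in the immediate field
  return F;
}
```

The patch then keeps only those shifted forms that `isLegalUse` accepts for the target.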
 
-    // Finally, get the terminating condition for the loop if possible.  If we
-    // can, we want to change it to use a post-incremented version of its
-    // induction variable, to allow coalescing the live ranges for the IV into
-    // one register value.
+/// GenerateICmpZeroScales - For ICmpZero, check to see if we can scale up
+/// the comparison. For example, x == y -> x*c == y*c.
+void LSRInstance::GenerateICmpZeroScales(LSRUse &LU, unsigned LUIdx,
+                                         Formula Base) {
+  if (LU.Kind != LSRUse::ICmpZero) return;
 
-    BranchInst *TermBr = dyn_cast<BranchInst>(ExitingBlock->getTerminator());
-    if (!TermBr)
+  // Determine the integer type for the base formula.
+  const Type *IntTy = Base.getType();
+  if (!IntTy) return;
+  if (SE.getTypeSizeInBits(IntTy) > 64) return;
+
+  // Don't do this if there is more than one offset.
+  if (LU.MinOffset != LU.MaxOffset) return;
+
+  assert(!Base.AM.BaseGV && "ICmpZero use is not legal!");
+
+  // Check each interesting stride.
+  for (SmallSetVector<int64_t, 8>::const_iterator
+       I = Factors.begin(), E = Factors.end(); I != E; ++I) {
+    int64_t Factor = *I;
+    Formula F = Base;
+
+    // Check that the multiplication doesn't overflow.
+    F.AM.BaseOffs = (uint64_t)Base.AM.BaseOffs * Factor;
+    if ((int64_t)F.AM.BaseOffs / Factor != Base.AM.BaseOffs)
       continue;
-    // FIXME: Overly conservative, termination condition could be an 'or' etc..
-    if (TermBr->isUnconditional() || !isa<ICmpInst>(TermBr->getCondition()))
+
+    // Check that multiplying with the use offset doesn't overflow.
+    int64_t Offset = LU.MinOffset;
+    Offset = (uint64_t)Offset * Factor;
+    if ((int64_t)Offset / Factor != LU.MinOffset)
       continue;
 
-    // Search IVUsesByStride to find Cond's IVUse if there is one.
-    IVStrideUse *CondUse = 0;
-    const SCEV *CondStride = 0;
-    ICmpInst *Cond = cast<ICmpInst>(TermBr->getCondition());
-    if (!FindIVUserForCond(Cond, CondUse, CondStride))
+    // Check that this scale is legal.
+    if (!isLegalUse(F.AM, Offset, Offset, LU.Kind, LU.AccessTy, TLI))
       continue;
 
-    // If the latch block is exiting and it's not a single block loop, it's
-    // not safe to use postinc iv in other exiting blocks. FIXME: overly
-    // conservative? How about icmp stride optimization?
-    bool UsePostInc =  !(e > 1 && LatchExit && ExitingBlock != LatchBlock);
-    if (UsePostInc && ExitingBlock != LatchBlock) {
-      if (!Cond->hasOneUse())
-        // See below, we don't want the condition to be cloned.
-        UsePostInc = false;
-      else {
-        // If exiting block is the latch block, we know it's safe and profitable
-        // to transform the icmp to use post-inc iv. Otherwise do so only if it
-        // would not reuse another iv and its iv would be reused by other uses.
-        // We are optimizing for the case where the icmp is the only use of the
-        // iv.
-        IVUsersOfOneStride &StrideUses = *IU->IVUsesByStride[CondStride];
-        for (ilist<IVStrideUse>::iterator I = StrideUses.Users.begin(),
-               E = StrideUses.Users.end(); I != E; ++I) {
-          if (I->getUser() == Cond)
-            continue;
-          if (!I->isUseOfPostIncrementedValue()) {
-            UsePostInc = false;
-            break;
-          }
+    // Compensate for the use having MinOffset built into it.
+    F.AM.BaseOffs = (uint64_t)F.AM.BaseOffs + Offset - LU.MinOffset;
+
+    const SCEV *FactorS = SE.getIntegerSCEV(Factor, IntTy);
+
+    // Check that multiplying with each base register doesn't overflow.
+    for (size_t i = 0, e = F.BaseRegs.size(); i != e; ++i) {
+      F.BaseRegs[i] = SE.getMulExpr(F.BaseRegs[i], FactorS);
+      if (getSDiv(F.BaseRegs[i], FactorS, SE) != Base.BaseRegs[i])
+        goto next;
+    }
+
+    // Check that multiplying with the scaled register doesn't overflow.
+    if (F.ScaledReg) {
+      F.ScaledReg = SE.getMulExpr(F.ScaledReg, FactorS);
+      if (getSDiv(F.ScaledReg, FactorS, SE) != Base.ScaledReg)
+        continue;
+    }
+
+    // If we make it here and it's legal, add it.
+    (void)InsertFormula(LU, LUIdx, F);
+  next:;
+  }
+}
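`GenerateICmpZeroScales` repeatedly uses one idiom: multiply in unsigned arithmetic (well-defined wraparound), then divide back and compare against the original operand to detect overflow. A self-contained version of that check, separate from the patch:

```cpp
#include <cassert>
#include <cstdint>

// Overflow check in the style of the patch: compute A * Factor with
// unsigned wraparound, then verify the product divides back to A.
// Like the original, the division can still trap for INT64_MIN / -1,
// and the uint64->int64 conversion is implementation-defined before
// C++20 (wraparound on mainstream compilers).
bool MulOverflows(int64_t A, int64_t Factor, int64_t &Result) {
  Result = (int64_t)((uint64_t)A * (uint64_t)Factor);
  return Factor != 0 && Result / Factor != A;
}
```

When the check fires, the patch abandons the candidate factor (`continue` or `goto next`) rather than emitting a formula with a wrapped constant.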
+
+/// GenerateScales - Generate stride factor reuse formulae by making use of
+/// scaled-offset address modes, for example.
+void LSRInstance::GenerateScales(LSRUse &LU, unsigned LUIdx,
+                                 Formula Base) {
+  // Determine the integer type for the base formula.
+  const Type *IntTy = Base.getType();
+  if (!IntTy) return;
+
+  // If this Formula already has a scaled register, we can't add another one.
+  if (Base.AM.Scale != 0) return;
+
+  // Check each interesting stride.
+  for (SmallSetVector<int64_t, 8>::const_iterator
+       I = Factors.begin(), E = Factors.end(); I != E; ++I) {
+    int64_t Factor = *I;
+
+    Base.AM.Scale = Factor;
+    Base.AM.HasBaseReg = Base.BaseRegs.size() > 1;
+    // Check whether this scale is going to be legal.
+    if (!isLegalUse(Base.AM, LU.MinOffset, LU.MaxOffset,
+                    LU.Kind, LU.AccessTy, TLI)) {
+      // As a special case, handle out-of-loop Basic users specially.
+      // TODO: Reconsider this special case.
+      if (LU.Kind == LSRUse::Basic &&
+          isLegalUse(Base.AM, LU.MinOffset, LU.MaxOffset,
+                     LSRUse::Special, LU.AccessTy, TLI) &&
+          LU.AllFixupsOutsideLoop)
+        LU.Kind = LSRUse::Special;
+      else
+        continue;
+    }
+    // For an ICmpZero, negating a solitary base register won't lead to
+    // new solutions.
+    if (LU.Kind == LSRUse::ICmpZero &&
+        !Base.AM.HasBaseReg && Base.AM.BaseOffs == 0 && !Base.AM.BaseGV)
+      continue;
+    // For each addrec base reg, apply the scale, if possible.
+    for (size_t i = 0, e = Base.BaseRegs.size(); i != e; ++i)
+      if (const SCEVAddRecExpr *AR =
+            dyn_cast<SCEVAddRecExpr>(Base.BaseRegs[i])) {
+        const SCEV *FactorS = SE.getIntegerSCEV(Factor, IntTy);
+        if (FactorS->isZero())
+          continue;
+        // Divide out the factor, ignoring high bits, since we'll be
+        // scaling the value back up in the end.
+        if (const SCEV *Quotient = getSDiv(AR, FactorS, SE, true)) {
+          // TODO: This could be optimized to avoid all the copying.
+          Formula F = Base;
+          F.ScaledReg = Quotient;
+          std::swap(F.BaseRegs[i], F.BaseRegs.back());
+          F.BaseRegs.pop_back();
+          (void)InsertFormula(LU, LUIdx, F);
         }
       }
+  }
+}
 
-      // If iv for the stride might be shared and any of the users use pre-inc
-      // iv might be used, then it's not safe to use post-inc iv.
-      if (UsePostInc &&
-          isa<SCEVConstant>(CondStride) &&
-          StrideMightBeShared(CondStride, L, true))
-        UsePostInc = false;
-    }
+/// GenerateTruncates - Generate reuse formulae from different IV types.
+void LSRInstance::GenerateTruncates(LSRUse &LU, unsigned LUIdx,
+                                    Formula Base) {
+  // This requires TargetLowering to tell us which truncates are free.
+  if (!TLI) return;
+
+  // Don't bother truncating symbolic values.
+  if (Base.AM.BaseGV) return;
+
+  // Determine the integer type for the base formula.
+  const Type *DstTy = Base.getType();
+  if (!DstTy) return;
+  DstTy = SE.getEffectiveSCEVType(DstTy);
+
+  for (SmallSetVector<const Type *, 4>::const_iterator
+       I = Types.begin(), E = Types.end(); I != E; ++I) {
+    const Type *SrcTy = *I;
+    if (SrcTy != DstTy && TLI->isTruncateFree(SrcTy, DstTy)) {
+      Formula F = Base;
+
+      if (F.ScaledReg) F.ScaledReg = SE.getAnyExtendExpr(F.ScaledReg, *I);
+      for (SmallVectorImpl<const SCEV *>::iterator J = F.BaseRegs.begin(),
+           JE = F.BaseRegs.end(); J != JE; ++J)
+        *J = SE.getAnyExtendExpr(*J, SrcTy);
+
+      // TODO: This assumes we've done basic processing on all uses and
+      // have an idea what the register usage is.
+      if (!F.hasRegsUsedByUsesOtherThan(LUIdx, RegUses))
+        continue;
 
-    // If the trip count is computed in terms of a max (due to ScalarEvolution
-    // being unable to find a sufficient guard, for example), change the loop
-    // comparison to use SLT or ULT instead of NE.
-    Cond = OptimizeMax(L, Cond, CondUse);
-
-    // If possible, change stride and operands of the compare instruction to
-    // eliminate one stride. However, avoid rewriting the compare instruction
-    // with an iv of new stride if it's likely the new stride uses will be
-    // rewritten using the stride of the compare instruction.
-    if (ExitingBlock == LatchBlock && isa<SCEVConstant>(CondStride)) {
-      // If the condition stride is a constant and it's the only use, we might
-      // want to optimize it first by turning it to count toward zero.
-      if (!StrideMightBeShared(CondStride, L, false) &&
-          !ShouldCountToZero(Cond, CondUse, SE, L, TLI))
-        Cond = ChangeCompareStride(L, Cond, CondUse, CondStride);
+      (void)InsertFormula(LU, LUIdx, F);
     }
+  }
+}
+
+namespace {
+
+/// WorkItem - Helper class for GenerateCrossUseConstantOffsets. It's used to
+/// defer modifications so that the search phase doesn't have to worry about
+/// the data structures moving underneath it.
+struct WorkItem {
+  size_t LUIdx;
+  int64_t Imm;
+  const SCEV *OrigReg;
+
+  WorkItem(size_t LI, int64_t I, const SCEV *R)
+    : LUIdx(LI), Imm(I), OrigReg(R) {}
 
-    if (!UsePostInc)
+  void print(raw_ostream &OS) const;
+  void dump() const;
+};
+
+}
+
+void WorkItem::print(raw_ostream &OS) const {
+  OS << "in formulae referencing " << *OrigReg << " in use " << LUIdx
+     << " , add offset " << Imm;
+}
+
+void WorkItem::dump() const {
+  print(errs()); errs() << '\n';
+}
+
+/// GenerateCrossUseConstantOffsets - Look for registers which are a constant
+/// distance apart and try to form reuse opportunities between them.
+void LSRInstance::GenerateCrossUseConstantOffsets() {
+  // Group the registers by their value without any added constant offset.
+  typedef std::map<int64_t, const SCEV *> ImmMapTy;
+  typedef DenseMap<const SCEV *, ImmMapTy> RegMapTy;
+  RegMapTy Map;
+  DenseMap<const SCEV *, SmallBitVector> UsedByIndicesMap;
+  SmallVector<const SCEV *, 8> Sequence;
+  for (RegUseTracker::const_iterator I = RegUses.begin(), E = RegUses.end();
+       I != E; ++I) {
+    const SCEV *Reg = *I;
+    int64_t Imm = ExtractImmediate(Reg, SE);
+    std::pair<RegMapTy::iterator, bool> Pair =
+      Map.insert(std::make_pair(Reg, ImmMapTy()));
+    if (Pair.second)
+      Sequence.push_back(Reg);
+    Pair.first->second.insert(std::make_pair(Imm, *I));
+    UsedByIndicesMap[Reg] |= RegUses.getUsedByIndices(*I);
+  }
+
+  // Now examine each set of registers with the same base value. Build up
+  // a list of work to do and do the work in a separate step so that we're
+  // not adding formulae and register counts while we're searching.
+  SmallVector<WorkItem, 32> WorkItems;
+  SmallSet<std::pair<size_t, int64_t>, 32> UniqueItems;
+  for (SmallVectorImpl<const SCEV *>::const_iterator I = Sequence.begin(),
+       E = Sequence.end(); I != E; ++I) {
+    const SCEV *Reg = *I;
+    const ImmMapTy &Imms = Map.find(Reg)->second;
+
+    // It's not worthwhile looking for reuse if there's only one offset.
+    if (Imms.size() == 1)
       continue;
 
-    DEBUG(dbgs() << "  Change loop exiting icmp to use postinc iv: "
-          << *Cond << '\n');
+    DEBUG(dbgs() << "Generating cross-use offsets for " << *Reg << ':';
+          for (ImmMapTy::const_iterator J = Imms.begin(), JE = Imms.end();
+               J != JE; ++J)
+            dbgs() << ' ' << J->first;
+          dbgs() << '\n');
 
-    // It's possible for the setcc instruction to be anywhere in the loop, and
-    // possible for it to have multiple users.  If it is not immediately before
-    // the exiting block branch, move it.
-    if (&*++BasicBlock::iterator(Cond) != (Instruction*)TermBr) {
-      if (Cond->hasOneUse()) {   // Condition has a single use, just move it.
-        Cond->moveBefore(TermBr);
-      } else {
-        // Otherwise, clone the terminating condition and insert into the
-        // loopend.
-        Cond = cast<ICmpInst>(Cond->clone());
-        Cond->setName(L->getHeader()->getName() + ".termcond");
-        ExitingBlock->getInstList().insert(TermBr, Cond);
+    // Examine each offset.
+    for (ImmMapTy::const_iterator J = Imms.begin(), JE = Imms.end();
+         J != JE; ++J) {
+      const SCEV *OrigReg = J->second;
 
-        // Clone the IVUse, as the old use still exists!
-        IU->IVUsesByStride[CondStride]->addUser(CondUse->getOffset(), Cond,
-                                             CondUse->getOperandValToReplace());
-        CondUse = &IU->IVUsesByStride[CondStride]->Users.back();
+      int64_t JImm = J->first;
+      const SmallBitVector &UsedByIndices = RegUses.getUsedByIndices(OrigReg);
+
+      if (!isa<SCEVConstant>(OrigReg) &&
+          UsedByIndicesMap[Reg].count() == 1) {
+        DEBUG(dbgs() << "Skipping cross-use reuse for " << *OrigReg << '\n');
+        continue;
+      }
+
+      // Conservatively examine offsets between this orig reg and a few selected
+      // other orig regs.
+      ImmMapTy::const_iterator OtherImms[] = {
+        Imms.begin(), prior(Imms.end()),
+        Imms.upper_bound((Imms.begin()->first + prior(Imms.end())->first) / 2)
+      };
+      for (size_t i = 0, e = array_lengthof(OtherImms); i != e; ++i) {
+        ImmMapTy::const_iterator M = OtherImms[i];
+        if (M == J || M == JE) continue;
+
+        // Compute the difference between the two.
+        int64_t Imm = (uint64_t)JImm - M->first;
+        for (int LUIdx = UsedByIndices.find_first(); LUIdx != -1;
+             LUIdx = UsedByIndices.find_next(LUIdx))
+          // Make a memo of this use, offset, and register tuple.
+          if (UniqueItems.insert(std::make_pair(LUIdx, Imm)))
+            WorkItems.push_back(WorkItem(LUIdx, Imm, OrigReg));
       }
     }
+  }
 
-    // If we get to here, we know that we can transform the setcc instruction to
-    // use the post-incremented version of the IV, allowing us to coalesce the
-    // live ranges for the IV correctly.
-    CondUse->setOffset(SE->getMinusSCEV(CondUse->getOffset(), CondStride));
-    CondUse->setIsUseOfPostIncrementedValue(true);
-    Changed = true;
+  Map.clear();
+  Sequence.clear();
+  UsedByIndicesMap.clear();
+  UniqueItems.clear();
+
+  // Now iterate through the worklist and add new formulae.
+  for (SmallVectorImpl<WorkItem>::const_iterator I = WorkItems.begin(),
+       E = WorkItems.end(); I != E; ++I) {
+    const WorkItem &WI = *I;
+    size_t LUIdx = WI.LUIdx;
+    LSRUse &LU = Uses[LUIdx];
+    int64_t Imm = WI.Imm;
+    const SCEV *OrigReg = WI.OrigReg;
+
+    const Type *IntTy = SE.getEffectiveSCEVType(OrigReg->getType());
+    const SCEV *NegImmS = SE.getSCEV(ConstantInt::get(IntTy, -(uint64_t)Imm));
+    unsigned BitWidth = SE.getTypeSizeInBits(IntTy);
+
+    // TODO: Use a more targeted data structure.
+    for (size_t L = 0, LE = LU.Formulae.size(); L != LE; ++L) {
+      Formula F = LU.Formulae[L];
+      // Use the immediate in the scaled register.
+      if (F.ScaledReg == OrigReg) {
+        int64_t Offs = (uint64_t)F.AM.BaseOffs +
+                       Imm * (uint64_t)F.AM.Scale;
+        // Don't create 50 + reg(-50).
+        if (F.referencesReg(SE.getSCEV(
+                   ConstantInt::get(IntTy, -(uint64_t)Offs))))
+          continue;
+        Formula NewF = F;
+        NewF.AM.BaseOffs = Offs;
+        if (!isLegalUse(NewF.AM, LU.MinOffset, LU.MaxOffset,
+                        LU.Kind, LU.AccessTy, TLI))
+          continue;
+        NewF.ScaledReg = SE.getAddExpr(NegImmS, NewF.ScaledReg);
+
+        // If the new scale is a constant in a register, and adding the constant
+        // value to the immediate would produce a value closer to zero than the
+        // immediate itself, then the formula isn't worthwhile.
+        if (const SCEVConstant *C = dyn_cast<SCEVConstant>(NewF.ScaledReg))
+          if (C->getValue()->getValue().isNegative() !=
+                (NewF.AM.BaseOffs < 0) &&
+              (C->getValue()->getValue().abs() * APInt(BitWidth, F.AM.Scale))
+                .ule(APInt(BitWidth, NewF.AM.BaseOffs).abs()))
+            continue;
 
-    ++NumLoopCond;
+        // OK, looks good.
+        (void)InsertFormula(LU, LUIdx, NewF);
+      } else {
+        // Use the immediate in a base register.
+        for (size_t N = 0, NE = F.BaseRegs.size(); N != NE; ++N) {
+          const SCEV *BaseReg = F.BaseRegs[N];
+          if (BaseReg != OrigReg)
+            continue;
+          Formula NewF = F;
+          NewF.AM.BaseOffs = (uint64_t)NewF.AM.BaseOffs + Imm;
+          if (!isLegalUse(NewF.AM, LU.MinOffset, LU.MaxOffset,
+                          LU.Kind, LU.AccessTy, TLI))
+            continue;
+          NewF.BaseRegs[N] = SE.getAddExpr(NegImmS, BaseReg);
+
+          // If the new formula has a constant in a register, and adding the
+          // constant value to the immediate would produce a value closer to
+          // zero than the immediate itself, then the formula isn't worthwhile.
+          for (SmallVectorImpl<const SCEV *>::const_iterator
+               J = NewF.BaseRegs.begin(), JE = NewF.BaseRegs.end();
+               J != JE; ++J)
+            if (const SCEVConstant *C = dyn_cast<SCEVConstant>(*J))
+              if (C->getValue()->getValue().isNegative() !=
+                    (NewF.AM.BaseOffs < 0) &&
+                  C->getValue()->getValue().abs()
+                    .ule(APInt(BitWidth, NewF.AM.BaseOffs).abs()))
+                goto skip_formula;
+
+          // Ok, looks good.
+          (void)InsertFormula(LU, LUIdx, NewF);
+          break;
+        skip_formula:;
+        }
+      }
+    }
   }
 }
 
-bool LoopStrengthReduce::OptimizeLoopCountIVOfStride(const SCEV* &Stride,
-                                                     IVStrideUse* &CondUse,
-                                                     Loop *L) {
-  // If the only use is an icmp of a loop exiting conditional branch, then
-  // attempt the optimization.
-  BasedUser User = BasedUser(*CondUse, SE);
-  assert(isa<ICmpInst>(User.Inst) && "Expecting an ICMPInst!");
-  ICmpInst *Cond = cast<ICmpInst>(User.Inst);
+/// GenerateAllReuseFormulae - Generate formulae for each use.
+void
+LSRInstance::GenerateAllReuseFormulae() {
+  // This is split into two loops so that hasRegsUsedByUsesOtherThan
+  // queries are more precise.
+  for (size_t LUIdx = 0, NumUses = Uses.size(); LUIdx != NumUses; ++LUIdx) {
+    LSRUse &LU = Uses[LUIdx];
+    for (size_t i = 0, f = LU.Formulae.size(); i != f; ++i)
+      GenerateReassociations(LU, LUIdx, LU.Formulae[i]);
+    for (size_t i = 0, f = LU.Formulae.size(); i != f; ++i)
+      GenerateCombinations(LU, LUIdx, LU.Formulae[i]);
+  }
+  for (size_t LUIdx = 0, NumUses = Uses.size(); LUIdx != NumUses; ++LUIdx) {
+    LSRUse &LU = Uses[LUIdx];
+    for (size_t i = 0, f = LU.Formulae.size(); i != f; ++i)
+      GenerateSymbolicOffsets(LU, LUIdx, LU.Formulae[i]);
+    for (size_t i = 0, f = LU.Formulae.size(); i != f; ++i)
+      GenerateConstantOffsets(LU, LUIdx, LU.Formulae[i]);
+    for (size_t i = 0, f = LU.Formulae.size(); i != f; ++i)
+      GenerateICmpZeroScales(LU, LUIdx, LU.Formulae[i]);
+    for (size_t i = 0, f = LU.Formulae.size(); i != f; ++i)
+      GenerateScales(LU, LUIdx, LU.Formulae[i]);
+    for (size_t i = 0, f = LU.Formulae.size(); i != f; ++i)
+      GenerateTruncates(LU, LUIdx, LU.Formulae[i]);
+  }
 
-  // Less strict check now that compare stride optimization is done.
-  if (!ShouldCountToZero(Cond, CondUse, SE, L))
-    return false;
+  GenerateCrossUseConstantOffsets();
+}
 
-  Value *CondOp0 = Cond->getOperand(0);
-  PHINode *PHIExpr = dyn_cast<PHINode>(CondOp0);
-  Instruction *Incr;
-  if (!PHIExpr) {
-    // Value tested is postinc. Find the phi node.
-    Incr = dyn_cast<BinaryOperator>(CondOp0);
-    // FIXME: Just use User.OperandValToReplace here?
-    if (!Incr || Incr->getOpcode() != Instruction::Add)
-      return false;
+/// If there are multiple formulae with the same set of registers used
+/// by other uses, pick the best one and delete the others.
+void LSRInstance::FilterOutUndesirableDedicatedRegisters() {
+#ifndef NDEBUG
+  bool Changed = false;
+#endif
+
+  // Collect the best formula for each unique set of shared registers. This
+  // is reset for each use.
+  typedef DenseMap<SmallVector<const SCEV *, 2>, size_t, UniquifierDenseMapInfo>
+    BestFormulaeTy;
+  BestFormulaeTy BestFormulae;
+
+  for (size_t LUIdx = 0, NumUses = Uses.size(); LUIdx != NumUses; ++LUIdx) {
+    LSRUse &LU = Uses[LUIdx];
+    FormulaSorter Sorter(L, LU, SE, DT);
+
+    // Clear out the set of used regs; it will be recomputed.
+    LU.Regs.clear();
+
+    for (size_t FIdx = 0, NumForms = LU.Formulae.size();
+         FIdx != NumForms; ++FIdx) {
+      Formula &F = LU.Formulae[FIdx];
+
+      SmallVector<const SCEV *, 2> Key;
+      for (SmallVectorImpl<const SCEV *>::const_iterator J = F.BaseRegs.begin(),
+           JE = F.BaseRegs.end(); J != JE; ++J) {
+        const SCEV *Reg = *J;
+        if (RegUses.isRegUsedByUsesOtherThan(Reg, LUIdx))
+          Key.push_back(Reg);
+      }
+      if (F.ScaledReg &&
+          RegUses.isRegUsedByUsesOtherThan(F.ScaledReg, LUIdx))
+        Key.push_back(F.ScaledReg);
+      // Unstable sort by host order ok, because this is only used for
+      // uniquifying.
+      std::sort(Key.begin(), Key.end());
+
+      std::pair<BestFormulaeTy::const_iterator, bool> P =
+        BestFormulae.insert(std::make_pair(Key, FIdx));
+      if (!P.second) {
+        Formula &Best = LU.Formulae[P.first->second];
+        if (Sorter.operator()(F, Best))
+          std::swap(F, Best);
+        DEBUG(dbgs() << "Filtering out "; F.print(dbgs());
+              dbgs() << "\n"
+                        "  in favor of "; Best.print(dbgs());
+              dbgs() << '\n');
+#ifndef NDEBUG
+        Changed = true;
+#endif
+        std::swap(F, LU.Formulae.back());
+        LU.Formulae.pop_back();
+        --FIdx;
+        --NumForms;
+        continue;
+      }
+      if (F.ScaledReg) LU.Regs.insert(F.ScaledReg);
+      LU.Regs.insert(F.BaseRegs.begin(), F.BaseRegs.end());
+    }
+    BestFormulae.clear();
+  }
 
-    PHIExpr = dyn_cast<PHINode>(Incr->getOperand(0));
-    if (!PHIExpr)
-      return false;
-    // 1 use for preinc value, the increment.
-    if (!PHIExpr->hasOneUse())
-      return false;
-  } else {
-    assert(isa<PHINode>(CondOp0) &&
-           "Unexpected loop exiting counting instruction sequence!");
-    PHIExpr = cast<PHINode>(CondOp0);
-    // Value tested is preinc.  Find the increment.
-    // A CmpInst is not a BinaryOperator; we depend on this.
-    Instruction::use_iterator UI = PHIExpr->use_begin();
-    Incr = dyn_cast<BinaryOperator>(UI);
-    if (!Incr)
-      Incr = dyn_cast<BinaryOperator>(++UI);
-    // One use for postinc value, the phi.  Unnecessarily conservative?
-    if (!Incr || !Incr->hasOneUse() || Incr->getOpcode() != Instruction::Add)
-      return false;
+  DEBUG(if (Changed) {
+          dbgs() << "\n"
+                    "After filtering out undesirable candidates:\n";
+          print_uses(dbgs());
+        });
+}
+
+/// NarrowSearchSpaceUsingHeuristics - If there is an extraordinary number of
+/// formulae to choose from, use some rough heuristics to prune down the number
+/// of formulae. This keeps the main solver from taking an extraordinary amount
+/// of time in some worst-case scenarios.
+void LSRInstance::NarrowSearchSpaceUsingHeuristics() {
+  // This is a rough guess that seems to work fairly well.
+  const size_t Limit = UINT16_MAX;
+
+  SmallPtrSet<const SCEV *, 4> Taken;
+  for (;;) {
+    // Estimate the worst-case number of solutions we might consider. We almost
+    // never consider this many solutions because we prune the search space,
+    // but the pruning isn't always sufficient.
+    uint32_t Power = 1;
+    for (SmallVectorImpl<LSRUse>::const_iterator I = Uses.begin(),
+         E = Uses.end(); I != E; ++I) {
+      size_t FSize = I->Formulae.size();
+      if (FSize >= Limit) {
+        Power = Limit;
+        break;
+      }
+      Power *= FSize;
+      if (Power >= Limit)
+        break;
+    }
+    if (Power < Limit)
+      break;
+
+    // Ok, we have too many formulae on our hands to conveniently handle.
+    // Use a rough heuristic to thin out the list.
+
+    // Pick the register which is used by the most LSRUses, which is likely
+    // to be a good reuse register candidate.
+    const SCEV *Best = 0;
+    unsigned BestNum = 0;
+    for (RegUseTracker::const_iterator I = RegUses.begin(), E = RegUses.end();
+         I != E; ++I) {
+      const SCEV *Reg = *I;
+      if (Taken.count(Reg))
+        continue;
+      if (!Best)
+        Best = Reg;
+      else {
+        unsigned Count = RegUses.getUsedByIndices(Reg).count();
+        if (Count > BestNum) {
+          Best = Reg;
+          BestNum = Count;
+        }
+      }
+    }
+
+    DEBUG(dbgs() << "Narrowing the search space by assuming " << *Best
+                 << " will yeild profitable reuse.\n");
+    Taken.insert(Best);
+
+    // In any use with formulae which reference this register, delete formulae
+    // which don't reference it.
+    for (SmallVectorImpl<LSRUse>::iterator I = Uses.begin(),
+         E = Uses.end(); I != E; ++I) {
+      LSRUse &LU = *I;
+      if (!LU.Regs.count(Best)) continue;
+
+      // Clear out the set of used regs; it will be recomputed.
+      LU.Regs.clear();
+
+      for (size_t i = 0, e = LU.Formulae.size(); i != e; ++i) {
+        Formula &F = LU.Formulae[i];
+        if (!F.referencesReg(Best)) {
+          DEBUG(dbgs() << "  Deleting "; F.print(dbgs()); dbgs() << '\n');
+          std::swap(LU.Formulae.back(), F);
+          LU.Formulae.pop_back();
+          --e;
+          --i;
+          continue;
+        }
+
+        if (F.ScaledReg) LU.Regs.insert(F.ScaledReg);
+        LU.Regs.insert(F.BaseRegs.begin(), F.BaseRegs.end());
+      }
+    }
+
+    DEBUG(dbgs() << "After pre-selection:\n";
+          print_uses(dbgs()));
+  }
+}
+
+/// SolveRecurse - This is the recursive solver.
+void LSRInstance::SolveRecurse(SmallVectorImpl<const Formula *> &Solution,
+                               Cost &SolutionCost,
+                               SmallVectorImpl<const Formula *> &Workspace,
+                               const Cost &CurCost,
+                               const SmallPtrSet<const SCEV *, 16> &CurRegs,
+                               DenseSet<const SCEV *> &VisitedRegs) const {
+  // Some ideas:
+  //  - prune more:
+  //    - use more aggressive filtering
+  //    - sort the formula so that the most profitable solutions are found first
+  //    - sort the uses too
+  //  - search faster:
+  //    - don't compute a cost and then compare; compare while computing a cost
+  //      and bail early.
+  //    - track register sets with SmallBitVector
+
+  const LSRUse &LU = Uses[Workspace.size()];
+
+  // If this use references any register that's already a part of the
+  // in-progress solution, consider it a requirement that a formula must
+  // reference that register in order to be considered. This prunes out
+  // unprofitable searching.
+  SmallSetVector<const SCEV *, 4> ReqRegs;
+  for (SmallPtrSet<const SCEV *, 16>::const_iterator I = CurRegs.begin(),
+       E = CurRegs.end(); I != E; ++I)
+    if (LU.Regs.count(*I))
+      ReqRegs.insert(*I);
+
+  bool AnySatisfiedReqRegs = false;
+  SmallPtrSet<const SCEV *, 16> NewRegs;
+  Cost NewCost;
+retry:
+  for (SmallVectorImpl<Formula>::const_iterator I = LU.Formulae.begin(),
+       E = LU.Formulae.end(); I != E; ++I) {
+    const Formula &F = *I;
+
+    // Ignore formulae which do not use any of the required registers.
+    for (SmallSetVector<const SCEV *, 4>::const_iterator J = ReqRegs.begin(),
+         JE = ReqRegs.end(); J != JE; ++J) {
+      const SCEV *Reg = *J;
+      if ((!F.ScaledReg || F.ScaledReg != Reg) &&
+          std::find(F.BaseRegs.begin(), F.BaseRegs.end(), Reg) ==
+          F.BaseRegs.end())
+        goto skip;
+    }
+    AnySatisfiedReqRegs = true;
+
+    // Evaluate the cost of the current formula. If it's already worse than
+    // the current best, prune the search at that point.
+    NewCost = CurCost;
+    NewRegs = CurRegs;
+    NewCost.RateFormula(F, NewRegs, VisitedRegs, L, LU.Offsets, SE, DT);
+    if (NewCost < SolutionCost) {
+      Workspace.push_back(&F);
+      if (Workspace.size() != Uses.size()) {
+        SolveRecurse(Solution, SolutionCost, Workspace, NewCost,
+                     NewRegs, VisitedRegs);
+        if (F.getNumRegs() == 1 && Workspace.size() == 1)
+          VisitedRegs.insert(F.ScaledReg ? F.ScaledReg : F.BaseRegs[0]);
+      } else {
+        DEBUG(dbgs() << "New best at "; NewCost.print(dbgs());
+              dbgs() << ". Regs:";
+              for (SmallPtrSet<const SCEV *, 16>::const_iterator
+                   I = NewRegs.begin(), E = NewRegs.end(); I != E; ++I)
+                dbgs() << ' ' << **I;
+              dbgs() << '\n');
+
+        SolutionCost = NewCost;
+        Solution = Workspace;
+      }
+      Workspace.pop_back();
+    }
+  skip:;
+  }
+
+  // If none of the formulae had all of the required registers, relax the
+  // constraint so that we don't exclude all formulae.
+  if (!AnySatisfiedReqRegs) {
+    ReqRegs.clear();
+    goto retry;
   }
+}
+
+void LSRInstance::Solve(SmallVectorImpl<const Formula *> &Solution) const {
+  SmallVector<const Formula *, 8> Workspace;
+  Cost SolutionCost;
+  SolutionCost.Loose();
+  Cost CurCost;
+  SmallPtrSet<const SCEV *, 16> CurRegs;
+  DenseSet<const SCEV *> VisitedRegs;
+  Workspace.reserve(Uses.size());
+
+  SolveRecurse(Solution, SolutionCost, Workspace, CurCost,
+               CurRegs, VisitedRegs);
+
+  // Ok, we've now made all our decisions.
+  DEBUG(dbgs() << "\n"
+                  "The chosen solution requires "; SolutionCost.print(dbgs());
+        dbgs() << ":\n";
+        for (size_t i = 0, e = Uses.size(); i != e; ++i) {
+          dbgs() << "  ";
+          Uses[i].print(dbgs());
+          dbgs() << "\n"
+                    "    ";
+          Solution[i]->print(dbgs());
+          dbgs() << '\n';
+        });
+}
+
+/// getImmediateDominator - A handy utility for the specific DominatorTree
+/// query that we need here.
+///
+static BasicBlock *getImmediateDominator(BasicBlock *BB, DominatorTree &DT) {
+  DomTreeNode *Node = DT.getNode(BB);
+  if (!Node) return 0;
+  Node = Node->getIDom();
+  if (!Node) return 0;
+  return Node->getBlock();
+}
 
-  // Replace the increment with a decrement.
-  DEBUG(dbgs() << "LSR: Examining use ");
-  DEBUG(WriteAsOperand(dbgs(), CondOp0, /*PrintType=*/false));
-  DEBUG(dbgs() << " in Inst: " << *Cond << '\n');
-  BinaryOperator *Decr =  BinaryOperator::Create(Instruction::Sub,
-                         Incr->getOperand(0), Incr->getOperand(1), "tmp", Incr);
-  Incr->replaceAllUsesWith(Decr);
-  Incr->eraseFromParent();
-
-  // Substitute endval-startval for the original startval, and 0 for the
-  // original endval.  Since we're only testing for equality this is OK even
-  // if the computation wraps around.
-  BasicBlock  *Preheader = L->getLoopPreheader();
-  Instruction *PreInsertPt = Preheader->getTerminator();
-  unsigned InBlock = L->contains(PHIExpr->getIncomingBlock(0)) ? 1 : 0;
-  Value *StartVal = PHIExpr->getIncomingValue(InBlock);
-  Value *EndVal = Cond->getOperand(1);
-  DEBUG(dbgs() << "    Optimize loop counting iv to count down ["
-        << *EndVal << " .. " << *StartVal << "]\n");
-
-  // FIXME: check for case where both are constant.
-  Constant* Zero = ConstantInt::get(Cond->getOperand(1)->getType(), 0);
-  BinaryOperator *NewStartVal = BinaryOperator::Create(Instruction::Sub,
-                                          EndVal, StartVal, "tmp", PreInsertPt);
-  PHIExpr->setIncomingValue(InBlock, NewStartVal);
-  Cond->setOperand(1, Zero);
-  DEBUG(dbgs() << "    New icmp: " << *Cond << "\n");
-
-  int64_t SInt = cast<SCEVConstant>(Stride)->getValue()->getSExtValue();
-  const SCEV *NewStride = 0;
-  bool Found = false;
-  for (unsigned i = 0, e = IU->StrideOrder.size(); i != e; ++i) {
-    const SCEV *OldStride = IU->StrideOrder[i];
-    if (const SCEVConstant *SC = dyn_cast<SCEVConstant>(OldStride))
-      if (SC->getValue()->getSExtValue() == -SInt) {
-        Found = true;
-        NewStride = OldStride;
+Value *LSRInstance::Expand(const LSRFixup &LF,
+                           const Formula &F,
+                           BasicBlock::iterator IP,
+                           Loop *L, Instruction *IVIncInsertPos,
+                           SCEVExpander &Rewriter,
+                           SmallVectorImpl<WeakVH> &DeadInsts,
+                           ScalarEvolution &SE, DominatorTree &DT) const {
+  const LSRUse &LU = Uses[LF.LUIdx];
+
+  // Then, collect the instructions which the expansion must remain dominated
+  // by: any operands that will be required in the expansion must dominate
+  // the insertion point for the replacement.
+  SmallVector<Instruction *, 4> Inputs;
+  if (Instruction *I = dyn_cast<Instruction>(LF.OperandValToReplace))
+    Inputs.push_back(I);
+  if (LU.Kind == LSRUse::ICmpZero)
+    if (Instruction *I =
+          dyn_cast<Instruction>(cast<ICmpInst>(LF.UserInst)->getOperand(1)))
+      Inputs.push_back(I);
+  if (LF.PostIncLoop && !L->contains(LF.UserInst))
+    Inputs.push_back(L->getLoopLatch()->getTerminator());
+
+  // Then, climb up the immediate dominator tree as far as we can go while
+  // still being dominated by the input positions.
+  for (;;) {
+    bool AllDominate = true;
+    Instruction *BetterPos = 0;
+    BasicBlock *IDom = getImmediateDominator(IP->getParent(), DT);
+    if (!IDom) break;
+    Instruction *Tentative = IDom->getTerminator();
+    for (SmallVectorImpl<Instruction *>::const_iterator I = Inputs.begin(),
+         E = Inputs.end(); I != E; ++I) {
+      Instruction *Inst = *I;
+      if (Inst == Tentative || !DT.dominates(Inst, Tentative)) {
+        AllDominate = false;
         break;
       }
+      if (IDom == Inst->getParent() &&
+          (!BetterPos || DT.dominates(BetterPos, Inst)))
+        BetterPos = next(BasicBlock::iterator(Inst));
+    }
+    if (!AllDominate)
+      break;
+    if (BetterPos)
+      IP = BetterPos;
+    else
+      IP = Tentative;
   }
+  while (isa<PHINode>(IP)) ++IP;
+
+  // Inform the Rewriter if we have a post-increment use, so that it can
+  // perform an advantageous expansion.
+  Rewriter.setPostInc(LF.PostIncLoop);
+
+  // This is the type that the user actually needs.
+  const Type *OpTy = LF.OperandValToReplace->getType();
+  // This will be the type that we'll initially expand to.
+  const Type *Ty = F.getType();
+  if (!Ty)
+    // No type known; just expand directly to the ultimate type.
+    Ty = OpTy;
+  else if (SE.getEffectiveSCEVType(Ty) == SE.getEffectiveSCEVType(OpTy))
+    // Expand directly to the ultimate type if it's the right size.
+    Ty = OpTy;
+  // This is the type to do integer arithmetic in.
+  const Type *IntTy = SE.getEffectiveSCEVType(Ty);
+
+  // Build up a list of operands to add together to form the full base.
+  SmallVector<const SCEV *, 8> Ops;
+
+  // Expand the BaseRegs portion.
+  for (SmallVectorImpl<const SCEV *>::const_iterator I = F.BaseRegs.begin(),
+       E = F.BaseRegs.end(); I != E; ++I) {
+    const SCEV *Reg = *I;
+    assert(!Reg->isZero() && "Zero allocated in a base register!");
+
+    // If we're expanding for a post-inc user for the add-rec's loop, make the
+    // post-inc adjustment.
+    const SCEV *Start = Reg;
+    while (const SCEVAddRecExpr *AR = dyn_cast<SCEVAddRecExpr>(Start)) {
+      if (AR->getLoop() == LF.PostIncLoop) {
+        Reg = SE.getAddExpr(Reg, AR->getStepRecurrence(SE));
+        // If the user is inside the loop, insert the code after the increment
+        // so that it is dominated by its operand.
+        if (L->contains(LF.UserInst))
+          IP = IVIncInsertPos;
+        break;
+      }
+      Start = AR->getStart();
+    }
 
-  if (!Found)
-    NewStride = SE->getIntegerSCEV(-SInt, Stride->getType());
-  IU->AddUser(NewStride, CondUse->getOffset(), Cond, Cond->getOperand(0));
-  IU->IVUsesByStride[Stride]->removeUser(CondUse);
+    Ops.push_back(SE.getUnknown(Rewriter.expandCodeFor(Reg, 0, IP)));
+  }
 
-  CondUse = &IU->IVUsesByStride[NewStride]->Users.back();
-  Stride = NewStride;
+  // Expand the ScaledReg portion.
+  Value *ICmpScaledV = 0;
+  if (F.AM.Scale != 0) {
+    const SCEV *ScaledS = F.ScaledReg;
+
+    // If we're expanding for a post-inc user for the add-rec's loop, make the
+    // post-inc adjustment.
+    if (const SCEVAddRecExpr *AR = dyn_cast<SCEVAddRecExpr>(ScaledS))
+      if (AR->getLoop() == LF.PostIncLoop)
+        ScaledS = SE.getAddExpr(ScaledS, AR->getStepRecurrence(SE));
+
+    if (LU.Kind == LSRUse::ICmpZero) {
+      // An interesting way of "folding" with an icmp is to use a negated
+      // scale, which we'll implement by inserting it into the other operand
+      // of the icmp.
+      assert(F.AM.Scale == -1 &&
+             "The only scale supported by ICmpZero uses is -1!");
+      ICmpScaledV = Rewriter.expandCodeFor(ScaledS, 0, IP);
+    } else {
+      // Otherwise just expand the scaled register and an explicit scale,
+      // which is expected to be matched as part of the address.
+      ScaledS = SE.getUnknown(Rewriter.expandCodeFor(ScaledS, 0, IP));
+      ScaledS = SE.getMulExpr(ScaledS,
+                              SE.getIntegerSCEV(F.AM.Scale,
+                                                ScaledS->getType()));
+      Ops.push_back(ScaledS);
+    }
+  }
 
-  ++NumCountZero;
+  // Expand the immediate portions.
+  if (F.AM.BaseGV)
+    Ops.push_back(SE.getSCEV(F.AM.BaseGV));
+  int64_t Offset = (uint64_t)F.AM.BaseOffs + LF.Offset;
+  if (Offset != 0) {
+    if (LU.Kind == LSRUse::ICmpZero) {
+      // The other interesting way of "folding" with an ICmpZero is to use a
+      // negated immediate.
+      if (!ICmpScaledV)
+        ICmpScaledV = ConstantInt::get(IntTy, -Offset);
+      else {
+        Ops.push_back(SE.getUnknown(ICmpScaledV));
+        ICmpScaledV = ConstantInt::get(IntTy, Offset);
+      }
+    } else {
+      // Just add the immediate values. These again are expected to be matched
+      // as part of the address.
+      Ops.push_back(SE.getIntegerSCEV(Offset, IntTy));
+    }
+  }
 
-  return true;
+  // Emit instructions summing all the operands.
+  const SCEV *FullS = Ops.empty() ?
+                      SE.getIntegerSCEV(0, IntTy) :
+                      SE.getAddExpr(Ops);
+  Value *FullV = Rewriter.expandCodeFor(FullS, Ty, IP);
+
+  // We're done expanding now, so reset the rewriter.
+  Rewriter.setPostInc(0);
+
+  // An ICmpZero Formula represents an ICmp which we're handling as a
+  // comparison against zero. Now that we've expanded an expression for that
+  // form, update the ICmp's other operand.
+  if (LU.Kind == LSRUse::ICmpZero) {
+    ICmpInst *CI = cast<ICmpInst>(LF.UserInst);
+    DeadInsts.push_back(CI->getOperand(1));
+    assert(!F.AM.BaseGV && "ICmp does not support folding a global value and "
+                           "a scale at the same time!");
+    if (F.AM.Scale == -1) {
+      if (ICmpScaledV->getType() != OpTy) {
+        Instruction *Cast =
+          CastInst::Create(CastInst::getCastOpcode(ICmpScaledV, false,
+                                                   OpTy, false),
+                           ICmpScaledV, OpTy, "tmp", CI);
+        ICmpScaledV = Cast;
+      }
+      CI->setOperand(1, ICmpScaledV);
+    } else {
+      assert(F.AM.Scale == 0 &&
+             "ICmp does not support folding a global value and "
+             "a scale at the same time!");
+      Constant *C = ConstantInt::getSigned(SE.getEffectiveSCEVType(OpTy),
+                                           -(uint64_t)Offset);
+      if (C->getType() != OpTy)
+        C = ConstantExpr::getCast(CastInst::getCastOpcode(C, false,
+                                                          OpTy, false),
+                                  C, OpTy);
+
+      CI->setOperand(1, C);
+    }
+  }
+
+  return FullV;
 }
 
-/// OptimizeLoopCountIV - If, after all sharing of IVs, the IV used for deciding
-/// when to exit the loop is used only for that purpose, try to rearrange things
-/// so it counts down to a test against zero.
-bool LoopStrengthReduce::OptimizeLoopCountIV(Loop *L) {
-  bool ThisChanged = false;
-  for (unsigned i = 0, e = IU->StrideOrder.size(); i != e; ++i) {
-    const SCEV *Stride = IU->StrideOrder[i];
-    std::map<const SCEV *, IVUsersOfOneStride *>::iterator SI =
-      IU->IVUsesByStride.find(Stride);
-    assert(SI != IU->IVUsesByStride.end() && "Stride doesn't exist!");
-    // FIXME: Generalize to non-affine IV's.
-    if (!SI->first->isLoopInvariant(L))
-      continue;
-    // If stride is a constant and it has an icmpinst use, check if we can
-    // optimize the loop to count down.
-    if (isa<SCEVConstant>(Stride) && SI->second->Users.size() == 1) {
-      Instruction *User = SI->second->Users.begin()->getUser();
-      if (!isa<ICmpInst>(User))
-        continue;
-      const SCEV *CondStride = Stride;
-      IVStrideUse *Use = &*SI->second->Users.begin();
-      if (!OptimizeLoopCountIVOfStride(CondStride, Use, L))
-        continue;
-      ThisChanged = true;
+/// Rewrite - Emit instructions for the leading candidate expression for this
+/// LSRUse (this is called "expanding"), and update the UserInst to reference
+/// the newly expanded value.
+void LSRInstance::Rewrite(const LSRFixup &LF,
+                          const Formula &F,
+                          Loop *L, Instruction *IVIncInsertPos,
+                          SCEVExpander &Rewriter,
+                          SmallVectorImpl<WeakVH> &DeadInsts,
+                          ScalarEvolution &SE, DominatorTree &DT,
+                          Pass *P) const {
+  const Type *OpTy = LF.OperandValToReplace->getType();
+
+  // First, find an insertion point that dominates UserInst. For PHI nodes,
+  // find the nearest block which dominates all the relevant uses.
+  if (PHINode *PN = dyn_cast<PHINode>(LF.UserInst)) {
+    DenseMap<BasicBlock *, Value *> Inserted;
+    for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i)
+      if (PN->getIncomingValue(i) == LF.OperandValToReplace) {
+        BasicBlock *BB = PN->getIncomingBlock(i);
 
-      // Now check if it's possible to reuse this iv for other stride uses.
-      for (unsigned j = 0, ee = IU->StrideOrder.size(); j != ee; ++j) {
-        const SCEV *SStride = IU->StrideOrder[j];
-        if (SStride == CondStride)
-          continue;
-        std::map<const SCEV *, IVUsersOfOneStride *>::iterator SII =
-          IU->IVUsesByStride.find(SStride);
-        assert(SII != IU->IVUsesByStride.end() && "Stride doesn't exist!");
-        // FIXME: Generalize to non-affine IV's.
-        if (!SII->first->isLoopInvariant(L))
-          continue;
-        // FIXME: Rewrite other stride using CondStride.
+        // If this is a critical edge, split the edge so that we do not insert
+        // the code on all predecessor/successor paths.  We do this unless this
+        // is the canonical backedge for this loop, which complicates post-inc
+        // users.
+        if (e != 1 && BB->getTerminator()->getNumSuccessors() > 1 &&
+            !isa<IndirectBrInst>(BB->getTerminator()) &&
+            (PN->getParent() != L->getHeader() || !L->contains(BB))) {
+          // Split the critical edge.
+          BasicBlock *NewBB = SplitCriticalEdge(BB, PN->getParent(), P);
+
+          // If PN is outside of the loop and BB is in the loop, we want to
+          // move the block to be immediately before the PHI block, not
+          // immediately after BB.
+          if (L->contains(BB) && !L->contains(PN))
+            NewBB->moveBefore(PN->getParent());
+
+          // Splitting the edge can reduce the number of PHI entries we have.
+          e = PN->getNumIncomingValues();
+          BB = NewBB;
+          i = PN->getBasicBlockIndex(BB);
+        }
+
+        std::pair<DenseMap<BasicBlock *, Value *>::iterator, bool> Pair =
+          Inserted.insert(std::make_pair(BB, static_cast<Value *>(0)));
+        if (!Pair.second)
+          PN->setIncomingValue(i, Pair.first->second);
+        else {
+          Value *FullV = Expand(LF, F, BB->getTerminator(), L, IVIncInsertPos,
+                                Rewriter, DeadInsts, SE, DT);
+
+          // If this is reuse-by-noop-cast, insert the noop cast.
+          if (FullV->getType() != OpTy)
+            FullV =
+              CastInst::Create(CastInst::getCastOpcode(FullV, false,
+                                                       OpTy, false),
+                               FullV, LF.OperandValToReplace->getType(),
+                               "tmp", BB->getTerminator());
+
+          PN->setIncomingValue(i, FullV);
+          Pair.first->second = FullV;
+        }
       }
+  } else {
+    Value *FullV = Expand(LF, F, LF.UserInst, L, IVIncInsertPos,
+                          Rewriter, DeadInsts, SE, DT);
+
+    // If this is reuse-by-noop-cast, insert the noop cast.
+    if (FullV->getType() != OpTy) {
+      Instruction *Cast =
+        CastInst::Create(CastInst::getCastOpcode(FullV, false, OpTy, false),
+                         FullV, OpTy, "tmp", LF.UserInst);
+      FullV = Cast;
     }
+
+    // Update the user. ICmpZero is handled specially here (for now) because
+    // Expand may have updated one of the operands of the icmp already, and
+    // its new value may happen to be equal to LF.OperandValToReplace, in
+    // which case doing replaceUsesOfWith leads to replacing both operands
+    // with the same value. TODO: Reorganize this.
+    if (Uses[LF.LUIdx].Kind == LSRUse::ICmpZero)
+      LF.UserInst->setOperand(0, FullV);
+    else
+      LF.UserInst->replaceUsesOfWith(LF.OperandValToReplace, FullV);
   }
 
-  Changed |= ThisChanged;
-  return ThisChanged;
+  DeadInsts.push_back(LF.OperandValToReplace);
 }
 
-bool LoopStrengthReduce::runOnLoop(Loop *L, LPPassManager &LPM) {
-  IU = &getAnalysis<IVUsers>();
-  SE = &getAnalysis<ScalarEvolution>();
-  Changed = false;
+void
+LSRInstance::ImplementSolution(const SmallVectorImpl<const Formula *> &Solution,
+                               Pass *P) {
+  // Keep track of instructions we may have made dead, so that
+  // we can remove them after we are done working.
+  SmallVector<WeakVH, 16> DeadInsts;
+
+  SCEVExpander Rewriter(SE);
+  Rewriter.disableCanonicalMode();
+  Rewriter.setIVIncInsertPos(L, IVIncInsertPos);
 
-  // If LoopSimplify form is not available, stay out of trouble.
-  if (!L->getLoopPreheader() || !L->getLoopLatch())
-    return false;
+  // Expand the new value definitions and update the users.
+  for (size_t i = 0, e = Fixups.size(); i != e; ++i) {
+    size_t LUIdx = Fixups[i].LUIdx;
+
+    Rewrite(Fixups[i], *Solution[LUIdx], L, IVIncInsertPos, Rewriter,
+            DeadInsts, SE, DT, P);
+
+    Changed = true;
+  }
 
-  if (!IU->IVUsesByStride.empty()) {
-    DEBUG(dbgs() << "\nLSR on \"" << L->getHeader()->getParent()->getName()
-          << "\" ";
-          L->print(dbgs()));
+  // Clean up after ourselves. This must be done before deleting any
+  // instructions.
+  Rewriter.clear();
 
-    // Sort the StrideOrder so we process larger strides first.
-    std::stable_sort(IU->StrideOrder.begin(), IU->StrideOrder.end(),
-                     StrideCompare(SE));
+  Changed |= DeleteTriviallyDeadInstructions(DeadInsts);
+}
 
-    // Optimize induction variables.  Some indvar uses can be transformed to use
-    // strides that will be needed for other purposes.  A common example of this
-    // is the exit test for the loop, which can often be rewritten to use the
-    // computation of some other indvar to decide when to terminate the loop.
-    OptimizeIndvars(L);
+LSRInstance::LSRInstance(const TargetLowering *tli, Loop *l, Pass *P)
+  : IU(P->getAnalysis<IVUsers>()),
+    SE(P->getAnalysis<ScalarEvolution>()),
+    DT(P->getAnalysis<DominatorTree>()),
+    TLI(tli), L(l), Changed(false), IVIncInsertPos(0) {
 
-    // Change loop terminating condition to use the postinc iv when possible
-    // and optimize loop terminating compare. FIXME: Move this after
-    // StrengthReduceIVUsersOfStride?
-    OptimizeLoopTermCond(L);
+  // If LoopSimplify form is not available, stay out of trouble.
+  if (!L->isLoopSimplifyForm()) return;
+
+  // If there's no interesting work to be done, bail early.
+  if (IU.empty()) return;
+
+  DEBUG(dbgs() << "\nLSR on loop ";
+        WriteAsOperand(dbgs(), L->getHeader(), /*PrintType=*/false);
+        dbgs() << ":\n");
+
+  /// OptimizeShadowIV - If the IV is used in an int-to-float cast
+  /// inside the loop, try to eliminate the cast operation.
+  OptimizeShadowIV();
+
+  // Change loop terminating condition to use the postinc iv when possible.
+  Changed |= OptimizeLoopTermCond();
+
+  CollectInterestingTypesAndFactors();
+  CollectFixupsAndInitialFormulae();
+  CollectLoopInvariantFixupsAndFormulae();
+
+  DEBUG(dbgs() << "LSR found " << Uses.size() << " uses:\n";
+        print_uses(dbgs()));
+
+  // Now use the reuse data to generate a bunch of interesting ways
+  // to formulate the values needed for the uses.
+  GenerateAllReuseFormulae();
+
+  DEBUG(dbgs() << "\n"
+                  "After generating reuse formulae:\n";
+        print_uses(dbgs()));
+
+  FilterOutUndesirableDedicatedRegisters();
+  NarrowSearchSpaceUsingHeuristics();
+
+  SmallVector<const Formula *, 8> Solution;
+  Solve(Solution);
+  assert(Solution.size() == Uses.size() && "Malformed solution!");
+
+  // Release memory that is no longer needed.
+  Factors.clear();
+  Types.clear();
+  RegUses.clear();
+
+#ifndef NDEBUG
+  // Formulae should be legal.
+  for (SmallVectorImpl<LSRUse>::const_iterator I = Uses.begin(),
+       E = Uses.end(); I != E; ++I) {
+     const LSRUse &LU = *I;
+     for (SmallVectorImpl<Formula>::const_iterator J = LU.Formulae.begin(),
+          JE = LU.Formulae.end(); J != JE; ++J)
+        assert(isLegalUse(J->AM, LU.MinOffset, LU.MaxOffset,
+                          LU.Kind, LU.AccessTy, TLI) &&
+               "Illegal formula generated!");
+  }
+#endif
 
-    // FIXME: We can shrink overlarge IV's here.  e.g. if the code has
-    // computation in i64 values and the target doesn't support i64, demote
-    // the computation to 32-bit if safe.
+  // Now that we've decided what we want, make it so.
+  ImplementSolution(Solution, P);
+}
 
-    // FIXME: Attempt to reuse values across multiple IV's.  In particular, we
-    // could have something like "for(i) { foo(i*8); bar(i*16) }", which should
-    // be codegened as "for (j = 0;; j+=8) { foo(j); bar(j+j); }" on X86/PPC.
-    // Need to be careful that IV's are all the same type.  Only works for
-    // intptr_t indvars.
+void LSRInstance::print_factors_and_types(raw_ostream &OS) const {
+  if (Factors.empty() && Types.empty()) return;
 
-    // IVsByStride keeps IVs for one particular loop.
-    assert(IVsByStride.empty() && "Stale entries in IVsByStride?");
+  OS << "LSR has identified the following interesting factors and types: ";
+  bool First = true;
 
-    StrengthReduceIVUsers(L);
+  for (SmallSetVector<int64_t, 8>::const_iterator
+       I = Factors.begin(), E = Factors.end(); I != E; ++I) {
+    if (!First) OS << ", ";
+    First = false;
+    OS << '*' << *I;
+  }
 
-    // After all sharing is done, see if we can adjust the loop to test against
-    // zero instead of counting up to a maximum.  This is usually faster.
-    OptimizeLoopCountIV(L);
+  for (SmallSetVector<const Type *, 4>::const_iterator
+       I = Types.begin(), E = Types.end(); I != E; ++I) {
+    if (!First) OS << ", ";
+    First = false;
+    OS << '(' << **I << ')';
+  }
+  OS << '\n';
+}
 
-    // We're done analyzing this loop; release all the state we built up for it.
-    IVsByStride.clear();
+void LSRInstance::print_fixups(raw_ostream &OS) const {
+  OS << "LSR is examining the following fixup sites:\n";
+  for (SmallVectorImpl<LSRFixup>::const_iterator I = Fixups.begin(),
+       E = Fixups.end(); I != E; ++I) {
+    const LSRFixup &LF = *I;
+    OS << "  ";
+    LF.print(OS);
+    OS << '\n';
+  }
+}
 
-    // Clean up after ourselves
-    DeleteTriviallyDeadInstructions();
+void LSRInstance::print_uses(raw_ostream &OS) const {
+  OS << "LSR is examining the following uses:\n";
+  for (SmallVectorImpl<LSRUse>::const_iterator I = Uses.begin(),
+       E = Uses.end(); I != E; ++I) {
+    const LSRUse &LU = *I;
+    OS << "  ";
+    LU.print(OS);
+    OS << '\n';
+    for (SmallVectorImpl<Formula>::const_iterator J = LU.Formulae.begin(),
+         JE = LU.Formulae.end(); J != JE; ++J) {
+      OS << "    ";
+      J->print(OS);
+      OS << '\n';
+    }
   }
+}
+
+void LSRInstance::print(raw_ostream &OS) const {
+  print_factors_and_types(OS);
+  print_fixups(OS);
+  print_uses(OS);
+}
+
+void LSRInstance::dump() const {
+  print(errs()); errs() << '\n';
+}
+
+namespace {
+
+class LoopStrengthReduce : public LoopPass {
+  /// TLI - Keep a pointer to a TargetLowering to consult for determining
+  /// transformation profitability.
+  const TargetLowering *const TLI;
+
+public:
+  static char ID; // Pass ID, replacement for typeid
+  explicit LoopStrengthReduce(const TargetLowering *tli = 0);
+
+private:
+  bool runOnLoop(Loop *L, LPPassManager &LPM);
+  void getAnalysisUsage(AnalysisUsage &AU) const;
+};
+
+}
+
+char LoopStrengthReduce::ID = 0;
+static RegisterPass<LoopStrengthReduce>
+X("loop-reduce", "Loop Strength Reduction");
+
+Pass *llvm::createLoopStrengthReducePass(const TargetLowering *TLI) {
+  return new LoopStrengthReduce(TLI);
+}
+
+LoopStrengthReduce::LoopStrengthReduce(const TargetLowering *tli)
+  : LoopPass(&ID), TLI(tli) {}
+
+void LoopStrengthReduce::getAnalysisUsage(AnalysisUsage &AU) const {
+  // We split critical edges, so we change the CFG.  However, we do update
+  // many analyses if they are around.
+  AU.addPreservedID(LoopSimplifyID);
+  AU.addPreserved<LoopInfo>();
+  AU.addPreserved("domfrontier");
+
+  AU.addRequiredID(LoopSimplifyID);
+  AU.addRequired<DominatorTree>();
+  AU.addPreserved<DominatorTree>();
+  AU.addRequired<ScalarEvolution>();
+  AU.addPreserved<ScalarEvolution>();
+  AU.addRequired<IVUsers>();
+  AU.addPreserved<IVUsers>();
+}
+
+bool LoopStrengthReduce::runOnLoop(Loop *L, LPPassManager & /*LPM*/) {
+  bool Changed = false;
+
+  // Run the main LSR transformation.
+  Changed |= LSRInstance(TLI, L, this).getChanged();
 
   // At this point, it is worth checking to see if any recurrence PHIs are also
   // dead, so that we can remove them as well.
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnrollPass.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnrollPass.cpp
index ee8cb4f..a355ec3 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnrollPass.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnrollPass.cpp
@@ -76,11 +76,12 @@ static RegisterPass<LoopUnroll> X("loop-unroll", "Unroll loops");
 Pass *llvm::createLoopUnrollPass() { return new LoopUnroll(); }
 
 /// ApproximateLoopSize - Approximate the size of the loop.
-static unsigned ApproximateLoopSize(const Loop *L) {
+static unsigned ApproximateLoopSize(const Loop *L, unsigned &NumCalls) {
   CodeMetrics Metrics;
   for (Loop::block_iterator I = L->block_begin(), E = L->block_end();
        I != E; ++I)
     Metrics.analyzeBasicBlock(*I);
+  NumCalls = Metrics.NumCalls;
   return Metrics.NumInsts;
 }
 
@@ -110,8 +111,13 @@ bool LoopUnroll::runOnLoop(Loop *L, LPPassManager &LPM) {
 
   // Enforce the threshold.
   if (UnrollThreshold != NoThreshold) {
-    unsigned LoopSize = ApproximateLoopSize(L);
+    unsigned NumCalls;
+    unsigned LoopSize = ApproximateLoopSize(L, NumCalls);
     DEBUG(dbgs() << "  Loop Size = " << LoopSize << "\n");
+    if (NumCalls != 0) {
+      DEBUG(dbgs() << "  Not unrolling loop with function calls.\n");
+      return false;
+    }
     uint64_t Size = (uint64_t)LoopSize*Count;
     if (TripCount != 1 && Size > UnrollThreshold) {
       DEBUG(dbgs() << "  Too large to fully unroll with count: " << Count
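Editorial note: the LoopUnrollPass hunk above adds two gates: refuse to unroll any loop containing calls, and otherwise compare approximate-size × unroll-count against the threshold. Reduced to a standalone predicate (illustrative names, not the LLVM API; the real pass also tries to shrink the count for partial unrolling rather than giving up outright):

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the unrolling gate: loops with calls are never unrolled, and a
// full unroll is rejected when the estimated unrolled size exceeds the
// threshold. TripCount == 1 bypasses the size check, as in the pass.
static bool shouldUnroll(unsigned LoopSize, unsigned NumCalls,
                         unsigned Count, unsigned TripCount,
                         uint64_t Threshold) {
  if (NumCalls != 0)
    return false;                        // calls dominate cost; don't unroll
  uint64_t Size = (uint64_t)LoopSize * Count;
  if (TripCount != 1 && Size > Threshold)
    return false;                        // too large to fully unroll
  return true;
}
```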
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnswitch.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnswitch.cpp
index 527a7b5..e5fba28 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnswitch.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/LoopUnswitch.cpp
@@ -169,6 +169,10 @@ Pass *llvm::createLoopUnswitchPass(bool Os) {
 /// invariant in the loop, or has an invariant piece, return the invariant.
 /// Otherwise, return null.
 static Value *FindLIVLoopCondition(Value *Cond, Loop *L, bool &Changed) {
+  // We can never unswitch on vector conditions.
+  if (isa<VectorType>(Cond->getType()))
+    return 0;
+
   // Constants should be folded, not unswitched on!
   if (isa<Constant>(Cond)) return 0;
 
@@ -401,7 +405,7 @@ bool LoopUnswitch::IsTrivialUnswitchCondition(Value *Cond, Constant **Val,
 /// UnswitchIfProfitable - We have found that we can unswitch currentLoop when
 /// LoopCond == Val to simplify the loop.  If we decide that this is profitable,
 /// unswitch the loop, reprocess the pieces, then return true.
-bool LoopUnswitch::UnswitchIfProfitable(Value *LoopCond, Constant *Val){
+bool LoopUnswitch::UnswitchIfProfitable(Value *LoopCond, Constant *Val) {
 
   initLoopData();
 
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/Reassociate.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/Reassociate.cpp
index 4a99f4a..bbd4b45 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/Reassociate.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/Reassociate.cpp
@@ -249,7 +249,7 @@ void Reassociate::LinearizeExpr(BinaryOperator *I) {
 
 /// LinearizeExprTree - Given an associative binary expression tree, traverse
 /// all of the uses putting it into canonical form.  This forces a left-linear
-/// form of the the expression (((a+b)+c)+d), and collects information about the
+/// form of the expression (((a+b)+c)+d), and collects information about the
 /// rank of the non-tree operands.
 ///
 /// NOTE: These intentionally destroys the expression tree operands (turning
@@ -299,7 +299,7 @@ void Reassociate::LinearizeExprTree(BinaryOperator *I,
     Success = false;
     MadeChange = true;
   } else if (RHSBO) {
-    // Turn (A+B)+(C+D) -> (((A+B)+C)+D).  This guarantees the the RHS is not
+    // Turn (A+B)+(C+D) -> (((A+B)+C)+D).  This guarantees the RHS is not
     // part of the expression tree.
     LinearizeExpr(I);
     LHS = LHSBO = cast<BinaryOperator>(I->getOperand(0));
@@ -933,6 +933,15 @@ void Reassociate::ReassociateBB(BasicBlock *BB) {
         isa<VectorType>(BI->getType()))
       continue;  // Floating point ops are not associative.
 
+    // Do not reassociate boolean (i1) expressions.  We want to preserve the
+    // original order of evaluation for short-circuited comparisons that
+    // SimplifyCFG has folded to AND/OR expressions.  If the expression
+    // is not further optimized, it is likely to be transformed back to a
+    // short-circuited form for code gen, and the source order may have been
+    // optimized for the most likely conditions.
+    if (BI->getType()->isInteger(1))
+      continue;
+
     // If this is a subtract instruction which is not already in negate form,
     // see if we can convert it to X+-Y.
     if (BI->getOpcode() == Instruction::Sub) {
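Editorial note: the Reassociate hunk above adds i1 to the set of types the pass refuses to touch, alongside the existing floating-point and vector exclusions, so that AND/OR chains produced by SimplifyCFG keep their source-order (short-circuit-friendly) evaluation. The combined eligibility test can be sketched as follows (hypothetical enum standing in for llvm::Type):

```cpp
#include <cassert>

// Illustrative stand-in for the type checks in ReassociateBB.
enum TypeKind { Int1, Int32, Float, Vector };

static bool canReassociate(TypeKind T) {
  if (T == Float || T == Vector)
    return false;  // FP/vector ops are not treated as associative
  if (T == Int1)
    return false;  // preserve source order of folded short-circuit tests
  return true;
}
```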
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/ScalarReplAggregates.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/ScalarReplAggregates.cpp
index 1cf486b..900d119 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/ScalarReplAggregates.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/ScalarReplAggregates.cpp
@@ -202,12 +202,18 @@ bool SROA::performPromotion(Function &F) {
   return Changed;
 }
 
-/// getNumSAElements - Return the number of elements in the specific struct or
-/// array.
-static uint64_t getNumSAElements(const Type *T) {
+/// ShouldAttemptScalarRepl - Decide if an alloca is a good candidate for
+/// SROA.  It must be a struct or array type with a small number of elements.
+static bool ShouldAttemptScalarRepl(AllocaInst *AI) {
+  const Type *T = AI->getAllocatedType();
+  // Do not promote any struct into more than 32 separate vars.
   if (const StructType *ST = dyn_cast<StructType>(T))
-    return ST->getNumElements();
-  return cast<ArrayType>(T)->getNumElements();
+    return ST->getNumElements() <= 32;
+  // Arrays are much less likely to be safe for SROA; only consider
+  // them if they are very small.
+  if (const ArrayType *AT = dyn_cast<ArrayType>(T))
+    return AT->getNumElements() <= 8;
+  return false;
 }
 
 // performScalarRepl - This algorithm is a simple worklist driven algorithm,
@@ -266,22 +272,18 @@ bool SROA::performScalarRepl(Function &F) {
     // Do not promote [0 x %struct].
     if (AllocaSize == 0) continue;
 
+    // If the alloca looks like a good candidate for scalar replacement, and if
+    // all its users can be transformed, then split up the aggregate into its
+    // separate elements.
+    if (ShouldAttemptScalarRepl(AI) && isSafeAllocaToScalarRepl(AI)) {
+      DoScalarReplacement(AI, WorkList);
+      Changed = true;
+      continue;
+    }
+
     // Do not promote any struct whose size is too big.
     if (AllocaSize > SRThreshold) continue;
 
-    if ((isa<StructType>(AI->getAllocatedType()) ||
-         isa<ArrayType>(AI->getAllocatedType())) &&
-        // Do not promote any struct into more than "32" separate vars.
-        getNumSAElements(AI->getAllocatedType()) <= SRThreshold/4) {
-      // Check that all of the users of the allocation are capable of being
-      // transformed.
-      if (isSafeAllocaToScalarRepl(AI)) {
-        DoScalarReplacement(AI, WorkList);
-        Changed = true;
-        continue;
-      }
-    }
-
     // If we can turn this aggregate value (potentially with casts) into a
     // simple scalar value that can be mem2reg'd into a register value.
     // IsNotTrivial tracks whether this is something that mem2reg could have
@@ -681,7 +683,7 @@ void SROA::RewriteGEP(GetElementPtrInst *GEPI, AllocaInst *AI, uint64_t Offset,
     Val->takeName(GEPI);
   }
   if (Val->getType() != GEPI->getType())
-    Val = new BitCastInst(Val, GEPI->getType(), Val->getNameStr(), GEPI);
+    Val = new BitCastInst(Val, GEPI->getType(), Val->getName(), GEPI);
   GEPI->replaceAllUsesWith(Val);
   DeadInsts.push_back(GEPI);
 }
@@ -769,7 +771,7 @@ void SROA::RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *Inst,
       Value *Idx[2] = { Zero,
                       ConstantInt::get(Type::getInt32Ty(MI->getContext()), i) };
       OtherElt = GetElementPtrInst::CreateInBounds(OtherPtr, Idx, Idx + 2,
-                                           OtherPtr->getNameStr()+"."+Twine(i),
+                                              OtherPtr->getName()+"."+Twine(i),
                                                    MI);
       uint64_t EltOffset;
       const PointerType *OtherPtrTy = cast<PointerType>(OtherPtr->getType());
@@ -853,12 +855,11 @@ void SROA::RewriteMemIntrinUserOfAlloca(MemIntrinsic *MI, Instruction *Inst,
     
     // Cast the element pointer to BytePtrTy.
     if (EltPtr->getType() != BytePtrTy)
-      EltPtr = new BitCastInst(EltPtr, BytePtrTy, EltPtr->getNameStr(), MI);
+      EltPtr = new BitCastInst(EltPtr, BytePtrTy, EltPtr->getName(), MI);
     
     // Cast the other pointer (if we have one) to BytePtrTy. 
     if (OtherElt && OtherElt->getType() != BytePtrTy)
-      OtherElt = new BitCastInst(OtherElt, BytePtrTy,OtherElt->getNameStr(),
-                                 MI);
+      OtherElt = new BitCastInst(OtherElt, BytePtrTy, OtherElt->getName(), MI);
     
     unsigned EltSize = TD->getTypeAllocSize(EltTy);
     
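[Editorial note] The new ShouldAttemptScalarRepl helper replaces the old SRThreshold/4 element test with fixed per-kind limits (32 struct fields, 8 array elements). A minimal stand-alone sketch of that decision, with an illustrative enum in place of LLVM's type hierarchy:

```cpp
#include <cassert>

// Illustrative aggregate classification; the real code inspects
// StructType/ArrayType on the alloca's allocated type.
enum class AggKind { Struct, Array, Other };

static bool shouldAttemptScalarRepl(AggKind kind, unsigned numElements) {
    switch (kind) {
    case AggKind::Struct:
        return numElements <= 32; // don't explode a struct into >32 vars
    case AggKind::Array:
        return numElements <= 8;  // arrays are riskier; only very small ones
    default:
        return false;             // scalars etc. are not SROA candidates
    }
}
```

The asymmetric limits encode the comment in the patch: arrays are much less likely to be safe for SROA, so they get a far smaller budget than structs.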
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyCFGPass.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyCFGPass.cpp
index 43447de..62f34a2 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyCFGPass.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyCFGPass.cpp
@@ -30,6 +30,7 @@
 #include "llvm/Attributes.h"
 #include "llvm/Support/CFG.h"
 #include "llvm/Pass.h"
+#include "llvm/Target/TargetData.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/ADT/Statistic.h"
@@ -261,7 +262,7 @@ static bool MergeEmptyReturnBlocks(Function &F) {
 
 /// IterativeSimplifyCFG - Call SimplifyCFG on all the blocks in the function,
 /// iterating until no more changes are made.
-static bool IterativeSimplifyCFG(Function &F) {
+static bool IterativeSimplifyCFG(Function &F, const TargetData *TD) {
   bool Changed = false;
   bool LocalChange = true;
   while (LocalChange) {
@@ -271,7 +272,7 @@ static bool IterativeSimplifyCFG(Function &F) {
     // if they are unneeded...
     //
     for (Function::iterator BBIt = ++F.begin(); BBIt != F.end(); ) {
-      if (SimplifyCFG(BBIt++)) {
+      if (SimplifyCFG(BBIt++, TD)) {
         LocalChange = true;
         ++NumSimpl;
       }
@@ -285,10 +286,11 @@ static bool IterativeSimplifyCFG(Function &F) {
 // simplify the CFG.
 //
 bool CFGSimplifyPass::runOnFunction(Function &F) {
+  const TargetData *TD = getAnalysisIfAvailable<TargetData>();
   bool EverChanged = RemoveUnreachableBlocksFromFn(F);
   EverChanged |= MergeEmptyReturnBlocks(F);
-  EverChanged |= IterativeSimplifyCFG(F);
-  
+  EverChanged |= IterativeSimplifyCFG(F, TD);
+
   // If neither pass changed anything, we're done.
   if (!EverChanged) return false;
 
@@ -299,11 +301,11 @@ bool CFGSimplifyPass::runOnFunction(Function &F) {
   // RemoveUnreachableBlocksFromFn doesn't do anything.
   if (!RemoveUnreachableBlocksFromFn(F))
     return true;
-  
+
   do {
-    EverChanged = IterativeSimplifyCFG(F);
+    EverChanged = IterativeSimplifyCFG(F, TD);
     EverChanged |= RemoveUnreachableBlocksFromFn(F);
   } while (EverChanged);
-  
+
   return true;
 }
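[Editorial note] Both IterativeSimplifyCFG and the do/while in runOnFunction are the same iterate-to-fixpoint pattern: keep sweeping until one full sweep makes no change. A hedged sketch with a hypothetical step callback standing in for a sweep of SimplifyCFG over all blocks:

```cpp
#include <cassert>
#include <functional>

// Minimal fixpoint driver: 'step' performs one full sweep and reports
// whether it changed anything; iterate until a sweep reports no change.
static bool iterateToFixpoint(const std::function<bool()> &step) {
    bool changed = false;
    bool localChange = true;
    while (localChange) {
        localChange = step();   // one sweep over all blocks
        changed |= localChange;
    }
    return changed;
}
```

The final sweep always runs "for nothing" — that wasted pass is the price of knowing a fixpoint was reached.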
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyHalfPowrLibCalls.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyHalfPowrLibCalls.cpp
index 5acd6aa..4464961 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyHalfPowrLibCalls.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyHalfPowrLibCalls.cpp
@@ -68,7 +68,7 @@ InlineHalfPowrs(const std::vector<Instruction *> &HalfPowrs,
     Function *Callee = Call->getCalledFunction();
 
     // Minimally sanity-check the CFG of half_powr to ensure that it contains
-    // the the kind of code we expect.  If we're running this pass, we have
+    // the kind of code we expect.  If we're running this pass, we have
     // reason to believe it will be what we expect.
     Function::iterator I = Callee->begin();
     BasicBlock *Prologue = I++;
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyLibCalls.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyLibCalls.cpp
index a49da9c..4216e8f 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyLibCalls.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/SimplifyLibCalls.cpp
@@ -152,7 +152,7 @@ Value *LibCallOptimization::EmitStrLen(Value *Ptr, IRBuilder<> &B) {
 
   Constant *StrLen =M->getOrInsertFunction("strlen", AttrListPtr::get(AWI, 2),
                                            TD->getIntPtrType(*Context),
-					   Type::getInt8PtrTy(*Context),
+                                           Type::getInt8PtrTy(*Context),
                                            NULL);
   CallInst *CI = B.CreateCall(StrLen, CastToCStr(Ptr, B), "strlen");
   if (const Function *F = dyn_cast<Function>(StrLen->stripPointerCasts()))
@@ -232,10 +232,10 @@ Value *LibCallOptimization::EmitMemChr(Value *Ptr, Value *Val,
   AWI = AttributeWithIndex::get(~0u, Attribute::ReadOnly | Attribute::NoUnwind);
 
   Value *MemChr = M->getOrInsertFunction("memchr", AttrListPtr::get(&AWI, 1),
-					 Type::getInt8PtrTy(*Context),
-					 Type::getInt8PtrTy(*Context),
+                                         Type::getInt8PtrTy(*Context),
+                                         Type::getInt8PtrTy(*Context),
                                          Type::getInt32Ty(*Context),
-					 TD->getIntPtrType(*Context),
+                                         TD->getIntPtrType(*Context),
                                          NULL);
   CallInst *CI = B.CreateCall3(MemChr, CastToCStr(Ptr, B), Val, Len, "memchr");
 
@@ -321,9 +321,9 @@ Value *LibCallOptimization::EmitPutChar(Value *Char, IRBuilder<> &B) {
                                           Type::getInt32Ty(*Context), NULL);
   CallInst *CI = B.CreateCall(PutChar,
                               B.CreateIntCast(Char,
-					      Type::getInt32Ty(*Context),
-                                              /*isSigned*/true,
-					      "chari"),
+                              Type::getInt32Ty(*Context),
+                              /*isSigned*/true,
+                              "chari"),
                               "putchar");
 
   if (const Function *F = dyn_cast<Function>(PutChar->stripPointerCasts()))
@@ -341,7 +341,7 @@ void LibCallOptimization::EmitPutS(Value *Str, IRBuilder<> &B) {
 
   Value *PutS = M->getOrInsertFunction("puts", AttrListPtr::get(AWI, 2),
                                        Type::getInt32Ty(*Context),
-                                    Type::getInt8PtrTy(*Context),
+                                       Type::getInt8PtrTy(*Context),
                                        NULL);
   CallInst *CI = B.CreateCall(PutS, CastToCStr(Str, B), "puts");
   if (const Function *F = dyn_cast<Function>(PutS->stripPointerCasts()))
@@ -359,13 +359,13 @@ void LibCallOptimization::EmitFPutC(Value *Char, Value *File, IRBuilder<> &B) {
   Constant *F;
   if (isa<PointerType>(File->getType()))
     F = M->getOrInsertFunction("fputc", AttrListPtr::get(AWI, 2),
-			       Type::getInt32Ty(*Context),
+                               Type::getInt32Ty(*Context),
                                Type::getInt32Ty(*Context), File->getType(),
-			       NULL);
+                               NULL);
   else
     F = M->getOrInsertFunction("fputc",
-			       Type::getInt32Ty(*Context),
-			       Type::getInt32Ty(*Context),
+                               Type::getInt32Ty(*Context),
+                               Type::getInt32Ty(*Context),
                                File->getType(), NULL);
   Char = B.CreateIntCast(Char, Type::getInt32Ty(*Context), /*isSigned*/true,
                          "chari");
@@ -386,7 +386,7 @@ void LibCallOptimization::EmitFPutS(Value *Str, Value *File, IRBuilder<> &B) {
   Constant *F;
   if (isa<PointerType>(File->getType()))
     F = M->getOrInsertFunction("fputs", AttrListPtr::get(AWI, 3),
-			       Type::getInt32Ty(*Context),
+                               Type::getInt32Ty(*Context),
                                Type::getInt8PtrTy(*Context),
                                File->getType(), NULL);
   else
@@ -414,13 +414,13 @@ void LibCallOptimization::EmitFWrite(Value *Ptr, Value *Size, Value *File,
                                TD->getIntPtrType(*Context),
                                Type::getInt8PtrTy(*Context),
                                TD->getIntPtrType(*Context),
-			       TD->getIntPtrType(*Context),
+                               TD->getIntPtrType(*Context),
                                File->getType(), NULL);
   else
     F = M->getOrInsertFunction("fwrite", TD->getIntPtrType(*Context),
                                Type::getInt8PtrTy(*Context),
                                TD->getIntPtrType(*Context),
-			       TD->getIntPtrType(*Context),
+                               TD->getIntPtrType(*Context),
                                File->getType(), NULL);
   CallInst *CI = B.CreateCall4(F, CastToCStr(Ptr, B), Size,
                         ConstantInt::get(TD->getIntPtrType(*Context), 1), File);
@@ -1203,22 +1203,23 @@ struct MemMoveChkOpt : public LibCallOptimization {
 
 struct StrCpyChkOpt : public LibCallOptimization {
   virtual Value *CallOptimizer(Function *Callee, CallInst *CI, IRBuilder<> &B) {
-    // These optimizations require TargetData.
-    if (!TD) return 0;
-
     const FunctionType *FT = Callee->getFunctionType();
     if (FT->getNumParams() != 3 || FT->getReturnType() != FT->getParamType(0) ||
         !isa<PointerType>(FT->getParamType(0)) ||
-        !isa<PointerType>(FT->getParamType(1)) ||
-        !isa<IntegerType>(FT->getParamType(2)))
+        !isa<PointerType>(FT->getParamType(1)))
       return 0;
 
     ConstantInt *SizeCI = dyn_cast<ConstantInt>(CI->getOperand(3));
     if (!SizeCI)
       return 0;
     
-    // We don't have any length information, just lower to a plain strcpy.
-    if (SizeCI->isAllOnesValue())
+    // If a) we don't have any length information, or b) we know this will
+    // fit, then just lower to a plain strcpy.  Otherwise we'll keep our
+    // strcpy_chk call, which may fail at runtime if the size is too long.
+    // TODO: It might be nice to get a maximum length out of the possible
+    // string lengths for varying strings.
+    if (SizeCI->isAllOnesValue() ||
+        SizeCI->getZExtValue() >= GetStringLength(CI->getOperand(2)))
       return EmitStrCpy(CI->getOperand(1), CI->getOperand(2), B);
 
     return 0;
@@ -1327,7 +1328,7 @@ struct Exp2Opt : public LibCallOptimization {
       Module *M = Caller->getParent();
       Value *Callee = M->getOrInsertFunction(Name, Op->getType(),
                                              Op->getType(),
-					     Type::getInt32Ty(*Context),NULL);
+                                             Type::getInt32Ty(*Context),NULL);
       CallInst *CI = B.CreateCall2(Callee, One, LdExpArg);
       if (const Function *F = dyn_cast<Function>(Callee->stripPointerCasts()))
         CI->setCallingConv(F->getCallingConv());
diff --git a/libclamav/c++/llvm/lib/Transforms/Scalar/TailRecursionElimination.cpp b/libclamav/c++/llvm/lib/Transforms/Scalar/TailRecursionElimination.cpp
index 4119cb9..162d902 100644
--- a/libclamav/c++/llvm/lib/Transforms/Scalar/TailRecursionElimination.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Scalar/TailRecursionElimination.cpp
@@ -211,7 +211,8 @@ bool TailCallElim::CanMoveAboveCall(Instruction *I, CallInst *CI) {
       // FIXME: Writes to memory only matter if they may alias the pointer
       // being loaded from.
       if (CI->mayWriteToMemory() ||
-          !isSafeToLoadUnconditionally(L->getPointerOperand(), L))
+          !isSafeToLoadUnconditionally(L->getPointerOperand(), L,
+                                       L->getAlignment()))
         return false;
     }
   }
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/BreakCriticalEdges.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/BreakCriticalEdges.cpp
index 19c7206..3657390 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/BreakCriticalEdges.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/BreakCriticalEdges.cpp
@@ -179,7 +179,7 @@ BasicBlock *llvm::SplitCriticalEdge(TerminatorInst *TI, unsigned SuccNum,
   // Create a new basic block, linking it into the CFG.
   BasicBlock *NewBB = BasicBlock::Create(TI->getContext(),
                       TIBB->getName() + "." + DestBB->getName() + "_crit_edge");
-  // Create our unconditional branch...
+  // Create our unconditional branch.
   BranchInst::Create(DestBB, NewBB);
 
   // Branch to the new block, breaking the edge.
@@ -192,16 +192,47 @@ BasicBlock *llvm::SplitCriticalEdge(TerminatorInst *TI, unsigned SuccNum,
   
   // If there are any PHI nodes in DestBB, we need to update them so that they
   // merge incoming values from NewBB instead of from TIBB.
-  //
-  for (BasicBlock::iterator I = DestBB->begin(); isa<PHINode>(I); ++I) {
-    PHINode *PN = cast<PHINode>(I);
-    // We no longer enter through TIBB, now we come in through NewBB.  Revector
-    // exactly one entry in the PHI node that used to come from TIBB to come
-    // from NewBB.
-    int BBIdx = PN->getBasicBlockIndex(TIBB);
-    PN->setIncomingBlock(BBIdx, NewBB);
+  if (PHINode *APHI = dyn_cast<PHINode>(DestBB->begin())) {
+    // This conceptually does:
+    //  foreach (PHINode *PN in DestBB)
+    //    PN->setIncomingBlock(PN->getIncomingBlock(TIBB), NewBB);
+    // but is optimized for two cases.
+    
+    if (APHI->getNumIncomingValues() <= 8) {  // Small # preds case.
+      unsigned BBIdx = 0;
+      for (BasicBlock::iterator I = DestBB->begin(); isa<PHINode>(I); ++I) {
+        // We no longer enter through TIBB, now we come in through NewBB.
+        // Revector exactly one entry in the PHI node that used to come from
+        // TIBB to come from NewBB.
+        PHINode *PN = cast<PHINode>(I);
+        
+        // Reuse the previous value of BBIdx if it lines up.  In cases where we
+        // have multiple phi nodes with *lots* of predecessors, this is a speed
+        // win because we don't have to scan the PHI looking for TIBB.  This
+        // happens because the incoming-block lists of PHI nodes are usually
+        // in the same order.
+        if (PN->getIncomingBlock(BBIdx) != TIBB)
+          BBIdx = PN->getBasicBlockIndex(TIBB);
+        PN->setIncomingBlock(BBIdx, NewBB);
+      }
+    } else {
+      // However, the foreach loop is slow for blocks with lots of predecessors
+      // because PHINode::getIncomingBlock is O(n) in # preds.  Instead, walk
+      // the user list of TIBB to find the PHI nodes.
+      SmallPtrSet<PHINode*, 16> UpdatedPHIs;
+    
+      for (Value::use_iterator UI = TIBB->use_begin(), E = TIBB->use_end();
+           UI != E; ) {
+        Value::use_iterator Use = UI++;
+        if (PHINode *PN = dyn_cast<PHINode>(Use)) {
+          // Remove one entry from each PHI.
+          if (PN->getParent() == DestBB && UpdatedPHIs.insert(PN))
+            PN->setOperand(Use.getOperandNo(), NewBB);
+        }
+      }
+    }
   }
-  
+   
   // If there are any other edges from TIBB to DestBB, update those to go
   // through the split block, making those edges non-critical as well (and
   // reducing the number of phi entries in the DestBB if relevant).
@@ -221,6 +252,15 @@ BasicBlock *llvm::SplitCriticalEdge(TerminatorInst *TI, unsigned SuccNum,
 
   // If we don't have a pass object, we can't update anything...
   if (P == 0) return NewBB;
+  
+  DominatorTree *DT = P->getAnalysisIfAvailable<DominatorTree>();
+  DominanceFrontier *DF = P->getAnalysisIfAvailable<DominanceFrontier>();
+  LoopInfo *LI = P->getAnalysisIfAvailable<LoopInfo>();
+  ProfileInfo *PI = P->getAnalysisIfAvailable<ProfileInfo>();
+  
+  // If we have nothing to update, just return.
+  if (DT == 0 && DF == 0 && LI == 0 && PI == 0)
+    return NewBB;
 
   // Now update analysis information.  Since the only predecessor of NewBB is
   // the TIBB, TIBB clearly dominates NewBB.  TIBB usually doesn't dominate
@@ -229,14 +269,23 @@ BasicBlock *llvm::SplitCriticalEdge(TerminatorInst *TI, unsigned SuccNum,
   // loop header) then NewBB dominates DestBB.
   SmallVector<BasicBlock*, 8> OtherPreds;
 
-  for (pred_iterator I = pred_begin(DestBB), E = pred_end(DestBB); I != E; ++I)
-    if (*I != NewBB)
-      OtherPreds.push_back(*I);
+  // If there is a PHI in the block, loop over predecessors with it, which is
+  // faster than iterating pred_begin/end.
+  if (PHINode *PN = dyn_cast<PHINode>(DestBB->begin())) {
+    for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i)
+      if (PN->getIncomingBlock(i) != NewBB)
+        OtherPreds.push_back(PN->getIncomingBlock(i));
+  } else {
+    for (pred_iterator I = pred_begin(DestBB), E = pred_end(DestBB);
+         I != E; ++I)
+      if (*I != NewBB)
+        OtherPreds.push_back(*I);
+  }
   
   bool NewBBDominatesDestBB = true;
   
   // Should we update DominatorTree information?
-  if (DominatorTree *DT = P->getAnalysisIfAvailable<DominatorTree>()) {
+  if (DT) {
     DomTreeNode *TINode = DT->getNode(TIBB);
 
     // The new block is not the immediate dominator for any other nodes, but
@@ -267,7 +316,7 @@ BasicBlock *llvm::SplitCriticalEdge(TerminatorInst *TI, unsigned SuccNum,
   }
 
   // Should we update DominanceFrontier information?
-  if (DominanceFrontier *DF = P->getAnalysisIfAvailable<DominanceFrontier>()) {
+  if (DF) {
     // If NewBBDominatesDestBB hasn't been computed yet, do so with DF.
     if (!OtherPreds.empty()) {
       // FIXME: IMPLEMENT THIS!
@@ -301,7 +350,7 @@ BasicBlock *llvm::SplitCriticalEdge(TerminatorInst *TI, unsigned SuccNum,
   }
   
   // Update LoopInfo if it is around.
-  if (LoopInfo *LI = P->getAnalysisIfAvailable<LoopInfo>()) {
+  if (LI) {
     if (Loop *TIL = LI->getLoopFor(TIBB)) {
       // If one or the other blocks were not in a loop, the new block is not
       // either, and thus LI doesn't need to be updated.
@@ -382,9 +431,8 @@ BasicBlock *llvm::SplitCriticalEdge(TerminatorInst *TI, unsigned SuccNum,
   }
 
   // Update ProfileInfo if it is around.
-  if (ProfileInfo *PI = P->getAnalysisIfAvailable<ProfileInfo>()) {
-    PI->splitEdge(TIBB,DestBB,NewBB,MergeIdenticalEdges);
-  }
+  if (PI)
+    PI->splitEdge(TIBB, DestBB, NewBB, MergeIdenticalEdges);
 
   return NewBB;
 }
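[Editorial note] The rewritten PHI update above has a fast path that reuses the incoming-block index found for the previous PHI, since PHIs in one block usually list predecessors in the same order. A simplified model of that index caching, with plain vectors of ints standing in for PHINode incoming-block lists:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Each inner vector models one PHI's incoming-block list.  Replace every
// occurrence of oldPred with newPred, trying the previously-found index
// first before falling back to a linear scan (the getBasicBlockIndex cost
// the patch is avoiding).
static void revectorIncoming(std::vector<std::vector<int>> &phis,
                             int oldPred, int newPred) {
    std::size_t idx = 0;
    for (auto &incoming : phis) {
        if (idx >= incoming.size() || incoming[idx] != oldPred) {
            // Cache miss: scan for oldPred from the start.
            idx = 0;
            while (idx < incoming.size() && incoming[idx] != oldPred)
                ++idx;
        }
        incoming[idx] = newPred;
    }
}
```

When the lists line up, each PHI after the first is updated in O(1) instead of O(#preds), which is the speed win the comment in the patch describes.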
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/CloneFunction.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/CloneFunction.cpp
index bd750cc..c80827d 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/CloneFunction.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/CloneFunction.cpp
@@ -33,7 +33,7 @@ using namespace llvm;
 // CloneBasicBlock - See comments in Cloning.h
 BasicBlock *llvm::CloneBasicBlock(const BasicBlock *BB,
                                   DenseMap<const Value*, Value*> &ValueMap,
-                                  const char *NameSuffix, Function *F,
+                                  const Twine &NameSuffix, Function *F,
                                   ClonedCodeInfo *CodeInfo) {
   BasicBlock *NewBB = BasicBlock::Create(BB->getContext(), "", F);
   if (BB->hasName()) NewBB->setName(BB->getName()+NameSuffix);
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/Local.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/Local.cpp
index 92bdf2d..7e7973a 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/Local.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/Local.cpp
@@ -38,20 +38,82 @@ using namespace llvm;
 //  Local analysis.
 //
 
+/// getUnderlyingObjectWithOffset - Strip off up to MaxLookup GEPs and
+/// bitcasts to get back to the underlying object being addressed, keeping
+/// track of the offset in bytes from the GEPs relative to the result.
+/// This is closely related to Value::getUnderlyingObject but is located
+/// here to avoid making VMCore depend on TargetData.
+static Value *getUnderlyingObjectWithOffset(Value *V, const TargetData *TD,
+                                            uint64_t &ByteOffset,
+                                            unsigned MaxLookup = 6) {
+  if (!isa<PointerType>(V->getType()))
+    return V;
+  for (unsigned Count = 0; MaxLookup == 0 || Count < MaxLookup; ++Count) {
+    if (GEPOperator *GEP = dyn_cast<GEPOperator>(V)) {
+      if (!GEP->hasAllConstantIndices())
+        return V;
+      SmallVector<Value*, 8> Indices(GEP->op_begin() + 1, GEP->op_end());
+      ByteOffset += TD->getIndexedOffset(GEP->getPointerOperandType(),
+                                         &Indices[0], Indices.size());
+      V = GEP->getPointerOperand();
+    } else if (Operator::getOpcode(V) == Instruction::BitCast) {
+      V = cast<Operator>(V)->getOperand(0);
+    } else if (GlobalAlias *GA = dyn_cast<GlobalAlias>(V)) {
+      if (GA->mayBeOverridden())
+        return V;
+      V = GA->getAliasee();
+    } else {
+      return V;
+    }
+    assert(isa<PointerType>(V->getType()) && "Unexpected operand type!");
+  }
+  return V;
+}
+
 /// isSafeToLoadUnconditionally - Return true if we know that executing a load
 /// from this value cannot trap.  If it is not obviously safe to load from the
 /// specified pointer, we do a quick local scan of the basic block containing
 /// ScanFrom, to determine if the address is already accessed.
-bool llvm::isSafeToLoadUnconditionally(Value *V, Instruction *ScanFrom) {
-  // If it is an alloca it is always safe to load from.
-  if (isa<AllocaInst>(V)) return true;
+bool llvm::isSafeToLoadUnconditionally(Value *V, Instruction *ScanFrom,
+                                       unsigned Align, const TargetData *TD) {
+  uint64_t ByteOffset = 0;
+  Value *Base = V;
+  if (TD)
+    Base = getUnderlyingObjectWithOffset(V, TD, ByteOffset);
+
+  const Type *BaseType = 0;
+  unsigned BaseAlign = 0;
+  if (const AllocaInst *AI = dyn_cast<AllocaInst>(Base)) {
+    // An alloca is safe to load from as long as it is suitably aligned.
+    BaseType = AI->getAllocatedType();
+    BaseAlign = AI->getAlignment();
+  } else if (const GlobalValue *GV = dyn_cast<GlobalValue>(Base)) {
+    // Global variables are safe to load from but their size cannot be
+    // guaranteed if they are overridden.
+    if (!isa<GlobalAlias>(GV) && !GV->mayBeOverridden()) {
+      BaseType = GV->getType()->getElementType();
+      BaseAlign = GV->getAlignment();
+    }
+  }
+
+  if (BaseType && BaseType->isSized()) {
+    if (TD && BaseAlign == 0)
+      BaseAlign = TD->getPrefTypeAlignment(BaseType);
+
+    if (Align <= BaseAlign) {
+      if (!TD)
+        return true; // Loading directly from an alloca or global is OK.
 
-  // If it is a global variable it is mostly safe to load from.
-  if (const GlobalValue *GV = dyn_cast<GlobalVariable>(V))
-    // Don't try to evaluate aliases.  External weak GV can be null.
-    return !isa<GlobalAlias>(GV) && !GV->hasExternalWeakLinkage();
+      // Check if the load is within the bounds of the underlying object.
+      const PointerType *AddrTy = cast<PointerType>(V->getType());
+      uint64_t LoadSize = TD->getTypeStoreSize(AddrTy->getElementType());
+      if (ByteOffset + LoadSize <= TD->getTypeAllocSize(BaseType) &&
+          (Align == 0 || (ByteOffset % Align) == 0))
+        return true;
+    }
+  }
 
-  // Otherwise, be a little bit agressive by scanning the local block where we
+  // Otherwise, be a little bit aggressive by scanning the local block where we
   // want to check to see if the pointer is already being loaded or stored
   // from/to.  If so, the previous load or store would have already trapped,
   // so there is no harm doing an extra load (also, CSE will later eliminate
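[Editorial note] The strengthened isSafeToLoadUnconditionally ultimately reduces to an offset/size/alignment check against the underlying object found by getUnderlyingObjectWithOffset. A sketch of just that arithmetic (names illustrative; the real code derives these values via TargetData):

```cpp
#include <cassert>
#include <cstdint>

// A load of loadSize bytes at byteOffset into an object of objectSize bytes
// is known not to trap if it stays inside the object and the offset honors
// the requested alignment (align == 0 means "no alignment requirement").
static bool loadIsInBounds(uint64_t byteOffset, uint64_t loadSize,
                           uint64_t objectSize, uint64_t align) {
    if (byteOffset + loadSize > objectSize)
        return false;
    return align == 0 || (byteOffset % align) == 0;
}
```

This is why the new code can justify hoisting loads that the old alloca/global special cases could not: with TargetData it can prove the access stays within the allocated object.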
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/LoopSimplify.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/LoopSimplify.cpp
index e81b779..57bab60 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/LoopSimplify.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/LoopSimplify.cpp
@@ -176,8 +176,9 @@ ReprocessLoop:
   SmallVector<BasicBlock*, 8> ExitBlocks;
   L->getExitBlocks(ExitBlocks);
     
-  SetVector<BasicBlock*> ExitBlockSet(ExitBlocks.begin(), ExitBlocks.end());
-  for (SetVector<BasicBlock*>::iterator I = ExitBlockSet.begin(),
+  SmallSetVector<BasicBlock *, 8> ExitBlockSet(ExitBlocks.begin(),
+                                               ExitBlocks.end());
+  for (SmallSetVector<BasicBlock *, 8>::iterator I = ExitBlockSet.begin(),
          E = ExitBlockSet.end(); I != E; ++I) {
     BasicBlock *ExitBlock = *I;
     for (pred_iterator PI = pred_begin(ExitBlock), PE = pred_end(ExitBlock);
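[Editorial note] The switch from SetVector to SmallSetVector keeps the same contract: deduplicate the exit blocks while preserving first-insertion order. A tiny stand-in illustrating that contract (not LLVM's implementation, which inlines small sizes to avoid heap allocation):

```cpp
#include <cassert>
#include <set>
#include <vector>

// Order-preserving set: insert() drops duplicates but keeps the order in
// which elements were first seen, like llvm::SetVector / SmallSetVector.
template <typename T>
struct SetVectorSketch {
    std::vector<T> order;
    std::set<T> seen;
    bool insert(const T &v) {
        if (!seen.insert(v).second)
            return false;       // already present
        order.push_back(v);
        return true;
    }
};
```

The "Small" variant only changes the storage strategy for the common few-exit-blocks case; iteration order and dedup behavior are identical, which is why the loop body above is unchanged.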
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/LoopUnroll.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/LoopUnroll.cpp
index 53117a0..e47c86d 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/LoopUnroll.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/LoopUnroll.cpp
@@ -29,7 +29,6 @@
 #include "llvm/Transforms/Utils/BasicBlockUtils.h"
 #include "llvm/Transforms/Utils/Cloning.h"
 #include "llvm/Transforms/Utils/Local.h"
-#include <cstdio>
 
 using namespace llvm;
 
@@ -204,15 +203,12 @@ bool llvm::UnrollLoop(Loop *L, unsigned Count, LoopInfo* LI, LPPassManager* LPM)
   Latches.push_back(LatchBlock);
 
   for (unsigned It = 1; It != Count; ++It) {
-    char SuffixBuffer[100];
-    sprintf(SuffixBuffer, ".%d", It);
-    
     std::vector<BasicBlock*> NewBlocks;
     
     for (std::vector<BasicBlock*>::iterator BB = LoopBlocks.begin(),
          E = LoopBlocks.end(); BB != E; ++BB) {
       ValueMapTy ValueMap;
-      BasicBlock *New = CloneBasicBlock(*BB, ValueMap, SuffixBuffer);
+      BasicBlock *New = CloneBasicBlock(*BB, ValueMap, "." + Twine(It));
       Header->getParent()->getBasicBlockList().push_back(New);
 
       // Loop over all of the PHI nodes in the block, changing them to use the
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/PromoteMemoryToRegister.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/PromoteMemoryToRegister.cpp
index f6cb71a..544e20b 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/PromoteMemoryToRegister.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/PromoteMemoryToRegister.cpp
@@ -85,8 +85,9 @@ bool llvm::isAllocaPromotable(const AllocaInst *AI) {
   return true;
 }
 
-/// Finds the llvm.dbg.declare intrinsic describing V, if any.
-static DbgDeclareInst *findDbgDeclare(Value *V) {
+/// FindAllocaDbgDeclare - Finds the llvm.dbg.declare intrinsic describing the
+/// alloca 'V', if any.
+static DbgDeclareInst *FindAllocaDbgDeclare(Value *V) {
   if (MDNode *DebugNode = MDNode::getIfExists(V->getContext(), &V, 1))
     for (Value::use_iterator UI = DebugNode->use_begin(),
          E = DebugNode->use_end(); UI != E; ++UI)
@@ -203,7 +204,7 @@ namespace {
     /// AllocaDbgDeclares - For each alloca, we keep track of the dbg.declare
     /// intrinsic that describes it, if any, so that we can convert it to a
     /// dbg.value intrinsic if the alloca gets promoted.
-    std::vector<DbgDeclareInst*> AllocaDbgDeclares;
+    SmallVector<DbgDeclareInst*, 8> AllocaDbgDeclares;
 
     /// Visited - The set of basic blocks the renamer has already visited.
     ///
@@ -219,6 +220,9 @@ namespace {
     PromoteMem2Reg(const std::vector<AllocaInst*> &A, DominatorTree &dt,
                    DominanceFrontier &df, AliasSetTracker *ast)
       : Allocas(A), DT(dt), DF(df), DIF(0), AST(ast) {}
+    ~PromoteMem2Reg() {
+      delete DIF;
+    }
 
     void run();
 
@@ -260,8 +264,7 @@ namespace {
                                   LargeBlockInfo &LBI);
     void PromoteSingleBlockAlloca(AllocaInst *AI, AllocaInfo &Info,
                                   LargeBlockInfo &LBI);
-    void ConvertDebugDeclareToDebugValue(DbgDeclareInst *DDI, StoreInst *SI,
-                                         uint64_t Offset);
+    void ConvertDebugDeclareToDebugValue(DbgDeclareInst *DDI, StoreInst *SI);
 
     
     void RenamePass(BasicBlock *BB, BasicBlock *Pred,
@@ -325,7 +328,7 @@ namespace {
         }
       }
       
-      DbgDeclare = findDbgDeclare(AI);
+      DbgDeclare = FindAllocaDbgDeclare(AI);
     }
   };
 }  // end of anonymous namespace
@@ -370,8 +373,11 @@ void PromoteMem2Reg::run() {
 
       // Finally, after the scan, check to see if the store is all that is left.
       if (Info.UsingBlocks.empty()) {
-        // Record debuginfo for the store before removing it.
-        ConvertDebugDeclareToDebugValue(Info.DbgDeclare, Info.OnlyStore, 0);
+        // Record debuginfo for the store and remove the declaration's debuginfo.
+        if (DbgDeclareInst *DDI = Info.DbgDeclare) {
+          ConvertDebugDeclareToDebugValue(DDI, Info.OnlyStore);
+          DDI->eraseFromParent();
+        }
         // Remove the (now dead) store and alloca.
         Info.OnlyStore->eraseFromParent();
         LBI.deleteValue(Info.OnlyStore);
@@ -401,7 +407,8 @@ void PromoteMem2Reg::run() {
         while (!AI->use_empty()) {
           StoreInst *SI = cast<StoreInst>(AI->use_back());
           // Record debuginfo for the store before removing it.
-          ConvertDebugDeclareToDebugValue(Info.DbgDeclare, SI, 0);
+          if (DbgDeclareInst *DDI = Info.DbgDeclare)
+            ConvertDebugDeclareToDebugValue(DDI, SI);
           SI->eraseFromParent();
           LBI.deleteValue(SI);
         }
@@ -413,6 +420,10 @@ void PromoteMem2Reg::run() {
         // The alloca has been processed, move on.
         RemoveFromAllocasList(AllocaNum);
         
+        // The alloca's debuginfo can be removed as well.
+        if (DbgDeclareInst *DDI = Info.DbgDeclare)
+          DDI->eraseFromParent();
+
         ++NumLocalPromoted;
         continue;
       }
@@ -488,7 +499,11 @@ void PromoteMem2Reg::run() {
     A->eraseFromParent();
   }
 
-  
+  // Remove the allocas' dbg.declare intrinsics from the function.
+  for (unsigned i = 0, e = AllocaDbgDeclares.size(); i != e; ++i)
+    if (DbgDeclareInst *DDI = AllocaDbgDeclares[i])
+      DDI->eraseFromParent();
+
   // Loop over all of the PHI nodes and see if there are any that we can get
   // rid of because they merge all of the same incoming values.  This can
   // happen due to undef values coming into the PHI nodes.  This process is
@@ -869,18 +884,19 @@ void PromoteMem2Reg::PromoteSingleBlockAlloca(AllocaInst *AI, AllocaInfo &Info,
 // Inserts an llvm.dbg.value intrinsic before the stores to an alloca'd value
 // that has an associated llvm.dbg.declare intrinsic.
 void PromoteMem2Reg::ConvertDebugDeclareToDebugValue(DbgDeclareInst *DDI,
-                                                     StoreInst *SI,
-                                                     uint64_t Offset) {
-  if (!DDI)
-    return;
-
+                                                     StoreInst *SI) {
   DIVariable DIVar(DDI->getVariable());
   if (!DIVar.getNode())
     return;
 
   if (!DIF)
     DIF = new DIFactory(*SI->getParent()->getParent()->getParent());
-  DIF->InsertDbgValueIntrinsic(SI->getOperand(0), Offset, DIVar, SI);
+  Instruction *DbgVal = DIF->InsertDbgValueIntrinsic(SI->getOperand(0), 0,
+                                                     DIVar, SI);
+  
+  // Propagate any debug metadata from the store onto the dbg.value.
+  if (MDNode *SIMD = SI->getMetadata("dbg"))
+    DbgVal->setMetadata("dbg", SIMD);
 }
 
 // QueuePhiNode - queues a phi-node to be added to a basic-block for a specific
@@ -996,7 +1012,8 @@ NextIteration:
       // what value were we writing?
       IncomingVals[ai->second] = SI->getOperand(0);
       // Record debuginfo for the store before removing it.
-      ConvertDebugDeclareToDebugValue(AllocaDbgDeclares[ai->second], SI, 0);
+      if (DbgDeclareInst *DDI = AllocaDbgDeclares[ai->second])
+        ConvertDebugDeclareToDebugValue(DDI, SI);
       BB->getInstList().erase(SI);
     }
   }
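The hunks above change mem2reg so that each store to a promoted alloca produces a dbg.value record that inherits the store's "dbg" metadata. The following is a schematic sketch of that propagation in plain C++ (the types here are illustrative stand-ins, not the LLVM API):

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <vector>

// Stand-in for a store to an alloca; DbgLoc models the "dbg" metadata
// attached to the StoreInst in the patch above.
struct Store {
  int Value;
  std::optional<std::string> DbgLoc;
};

// Stand-in for the llvm.dbg.value record emitted per store.
struct DbgValue {
  int Value;
  std::optional<std::string> DbgLoc;
};

// Mirrors ConvertDebugDeclareToDebugValue: one dbg.value per store, with
// the store's debug metadata (if any) copied onto the new record.
std::vector<DbgValue> ConvertStores(const std::vector<Store> &Stores) {
  std::vector<DbgValue> Out;
  for (const Store &S : Stores)
    Out.push_back({S.Value, S.DbgLoc}); // metadata travels with the value
  return Out;
}
```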
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/SSAUpdater.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/SSAUpdater.cpp
index 161bf21..a31235a 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/SSAUpdater.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/SSAUpdater.cpp
@@ -71,6 +71,50 @@ void SSAUpdater::AddAvailableValue(BasicBlock *BB, Value *V) {
   getAvailableVals(AV)[BB] = V;
 }
 
+/// IsEquivalentPHI - Check if PHI has the same incoming value as specified
+/// in ValueMapping for each predecessor block.
+static bool IsEquivalentPHI(PHINode *PHI, 
+                            DenseMap<BasicBlock*, Value*> &ValueMapping) {
+  unsigned PHINumValues = PHI->getNumIncomingValues();
+  if (PHINumValues != ValueMapping.size())
+    return false;
+
+  // Scan the phi to see if it matches.
+  for (unsigned i = 0, e = PHINumValues; i != e; ++i)
+    if (ValueMapping[PHI->getIncomingBlock(i)] !=
+        PHI->getIncomingValue(i)) {
+      return false;
+    }
+
+  return true;
+}
+
+/// GetExistingPHI - Check if BB already contains a phi node that is equivalent
+/// to the specified mapping from predecessor blocks to incoming values.
+static Value *GetExistingPHI(BasicBlock *BB,
+                             DenseMap<BasicBlock*, Value*> &ValueMapping) {
+  PHINode *SomePHI;
+  for (BasicBlock::iterator It = BB->begin();
+       (SomePHI = dyn_cast<PHINode>(It)); ++It) {
+    if (IsEquivalentPHI(SomePHI, ValueMapping))
+      return SomePHI;
+  }
+  return 0;
+}
+
+/// GetExistingPHI - Check if BB already contains an equivalent phi node.
+/// The InputIt type must be an iterator over std::pair<BasicBlock*, Value*>
+/// objects that specify the mapping from predecessor blocks to incoming values.
+template<typename InputIt>
+static Value *GetExistingPHI(BasicBlock *BB, const InputIt &I,
+                             const InputIt &E) {
+  // Avoid creating the mapping if BB has no phi nodes at all.
+  if (!isa<PHINode>(BB->begin()))
+    return 0;
+  DenseMap<BasicBlock*, Value*> ValueMapping(I, E);
+  return GetExistingPHI(BB, ValueMapping);
+}
+
 /// GetValueAtEndOfBlock - Construct SSA form, materializing a value that is
 /// live at the end of the specified block.
 Value *SSAUpdater::GetValueAtEndOfBlock(BasicBlock *BB) {
@@ -149,28 +193,11 @@ Value *SSAUpdater::GetValueInMiddleOfBlock(BasicBlock *BB) {
   if (SingularValue != 0)
     return SingularValue;
 
-  // Otherwise, we do need a PHI: check to see if we already have one available
-  // in this block that produces the right value.
-  if (isa<PHINode>(BB->begin())) {
-    DenseMap<BasicBlock*, Value*> ValueMapping(PredValues.begin(),
-                                               PredValues.end());
-    PHINode *SomePHI;
-    for (BasicBlock::iterator It = BB->begin();
-         (SomePHI = dyn_cast<PHINode>(It)); ++It) {
-      // Scan this phi to see if it is what we need.
-      bool Equal = true;
-      for (unsigned i = 0, e = SomePHI->getNumIncomingValues(); i != e; ++i)
-        if (ValueMapping[SomePHI->getIncomingBlock(i)] !=
-            SomePHI->getIncomingValue(i)) {
-          Equal = false;
-          break;
-        }
-         
-      if (Equal)
-        return SomePHI;
-    }
-  }
-  
+  // Otherwise, we do need a PHI.
+  if (Value *ExistingPHI = GetExistingPHI(BB, PredValues.begin(),
+                                          PredValues.end()))
+    return ExistingPHI;
+
   // Ok, we have no way out, insert a new one now.
   PHINode *InsertedPHI = PHINode::Create(PrototypeValue->getType(),
                                          PrototypeValue->getName(),
@@ -255,7 +282,7 @@ Value *SSAUpdater::GetValueAtEndOfBlockInternal(BasicBlock *BB) {
   // producing the same value.  If so, this value will capture it, if not, it
   // will get reset to null.  We distinguish the no-predecessor case explicitly
   // below.
-  TrackingVH<Value> SingularValue;
+  TrackingVH<Value> ExistingValue;
 
   // We can get our predecessor info by walking the pred_iterator list, but it
   // is relatively slow.  If we already have PHI nodes in this block, walk one
@@ -266,11 +293,11 @@ Value *SSAUpdater::GetValueAtEndOfBlockInternal(BasicBlock *BB) {
       Value *PredVal = GetValueAtEndOfBlockInternal(PredBB);
       IncomingPredInfo.push_back(std::make_pair(PredBB, PredVal));
 
-      // Compute SingularValue.
+      // Set ExistingValue to the singular value from all predecessors so far.
       if (i == 0)
-        SingularValue = PredVal;
-      else if (PredVal != SingularValue)
-        SingularValue = 0;
+        ExistingValue = PredVal;
+      else if (PredVal != ExistingValue)
+        ExistingValue = 0;
     }
   } else {
     bool isFirstPred = true;
@@ -279,12 +306,12 @@ Value *SSAUpdater::GetValueAtEndOfBlockInternal(BasicBlock *BB) {
       Value *PredVal = GetValueAtEndOfBlockInternal(PredBB);
       IncomingPredInfo.push_back(std::make_pair(PredBB, PredVal));
 
-      // Compute SingularValue.
+      // Set ExistingValue to the singular value from all predecessors so far.
       if (isFirstPred) {
-        SingularValue = PredVal;
+        ExistingValue = PredVal;
         isFirstPred = false;
-      } else if (PredVal != SingularValue)
-        SingularValue = 0;
+      } else if (PredVal != ExistingValue)
+        ExistingValue = 0;
     }
   }
 
@@ -300,31 +327,38 @@ Value *SSAUpdater::GetValueAtEndOfBlockInternal(BasicBlock *BB) {
   /// above.
   TrackingVH<Value> &InsertedVal = AvailableVals[BB];
 
-  // If all the predecessor values are the same then we don't need to insert a
+  // If the predecessor values are not all the same, then check to see if there
+  // is an existing PHI that can be used.
+  if (!ExistingValue)
+    ExistingValue = GetExistingPHI(BB,
+                                   IncomingPredInfo.begin()+FirstPredInfoEntry,
+                                   IncomingPredInfo.end());
+
+  // If there is an existing value we can use, then we don't need to insert a
   // PHI.  This is the simple and common case.
-  if (SingularValue) {
-    // If a PHI node got inserted, replace it with the singlar value and delete
+  if (ExistingValue) {
+    // If a PHI node got inserted, replace it with the existing value and delete
     // it.
     if (InsertedVal) {
       PHINode *OldVal = cast<PHINode>(InsertedVal);
       // Be careful about dead loops.  These RAUW's also update InsertedVal.
-      if (InsertedVal != SingularValue)
-        OldVal->replaceAllUsesWith(SingularValue);
+      if (InsertedVal != ExistingValue)
+        OldVal->replaceAllUsesWith(ExistingValue);
       else
         OldVal->replaceAllUsesWith(UndefValue::get(InsertedVal->getType()));
       OldVal->eraseFromParent();
     } else {
-      InsertedVal = SingularValue;
+      InsertedVal = ExistingValue;
     }
 
-    // Either path through the 'if' should have set insertedVal -> SingularVal.
-    assert((InsertedVal == SingularValue || isa<UndefValue>(InsertedVal)) &&
-           "RAUW didn't change InsertedVal to be SingularVal");
+    // Either path through the 'if' should have set InsertedVal -> ExistingVal.
+    assert((InsertedVal == ExistingValue || isa<UndefValue>(InsertedVal)) &&
+           "RAUW didn't change InsertedVal to be ExistingValue");
 
     // Drop the entries we added in IncomingPredInfo to restore the stack.
     IncomingPredInfo.erase(IncomingPredInfo.begin()+FirstPredInfoEntry,
                            IncomingPredInfo.end());
-    return SingularValue;
+    return ExistingValue;
   }
 
   // Otherwise, we do need a PHI: insert one now if we don't already have one.
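The new IsEquivalentPHI/GetExistingPHI helpers above test whether a block already holds a phi node matching a predecessor-to-value mapping. A minimal sketch of that check, with plain ints standing in for BasicBlock* and Value* (this uses find rather than LLVM's default-inserting DenseMap lookup, so the size check and lookup together give the same result):

```cpp
#include <cassert>
#include <unordered_map>
#include <utility>
#include <vector>

// A "phi" is a list of (predecessor, incoming value) pairs.
using Phi = std::vector<std::pair<int, int>>;
using Mapping = std::unordered_map<int, int>;

// Mirrors IsEquivalentPHI: the phi matches the mapping iff the sizes agree
// and every incoming value equals the mapped value for its predecessor.
bool IsEquivalentPhi(const Phi &P, const Mapping &M) {
  if (P.size() != M.size())
    return false;
  for (const auto &Entry : P) {
    auto It = M.find(Entry.first);
    if (It == M.end() || It->second != Entry.second)
      return false;
  }
  return true;
}
```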
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/SimplifyCFG.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/SimplifyCFG.cpp
index cb53296..795b6bf 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/SimplifyCFG.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/SimplifyCFG.cpp
@@ -23,6 +23,7 @@
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/raw_ostream.h"
 #include "llvm/Analysis/ConstantFolding.h"
+#include "llvm/Target/TargetData.h"
 #include "llvm/Transforms/Utils/BasicBlockUtils.h"
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/SmallVector.h"
@@ -36,6 +37,28 @@ using namespace llvm;
 
 STATISTIC(NumSpeculations, "Number of speculatively executed instructions");
 
+namespace {
+class SimplifyCFGOpt {
+  const TargetData *const TD;
+
+  ConstantInt *GetConstantInt(Value *V);
+  Value *GatherConstantSetEQs(Value *V, std::vector<ConstantInt*> &Values);
+  Value *GatherConstantSetNEs(Value *V, std::vector<ConstantInt*> &Values);
+  bool GatherValueComparisons(Instruction *Cond, Value *&CompVal,
+                              std::vector<ConstantInt*> &Values);
+  Value *isValueEqualityComparison(TerminatorInst *TI);
+  BasicBlock *GetValueEqualityComparisonCases(TerminatorInst *TI,
+    std::vector<std::pair<ConstantInt*, BasicBlock*> > &Cases);
+  bool SimplifyEqualityComparisonWithOnlyPredecessor(TerminatorInst *TI,
+                                                     BasicBlock *Pred);
+  bool FoldValueComparisonIntoPredecessors(TerminatorInst *TI);
+
+public:
+  explicit SimplifyCFGOpt(const TargetData *td) : TD(td) {}
+  bool run(BasicBlock *BB);
+};
+}
+
 /// SafeToMergeTerminators - Return true if it is safe to merge these two
 /// terminator instructions together.
 ///
@@ -243,17 +266,48 @@ static bool DominatesMergePoint(Value *V, BasicBlock *BB,
   return true;
 }
 
+/// GetConstantInt - Extract ConstantInt from value, looking through IntToPtr
+/// and null pointer constants. Return NULL if the value is not a constant int.
+ConstantInt *SimplifyCFGOpt::GetConstantInt(Value *V) {
+  // Normal constant int.
+  ConstantInt *CI = dyn_cast<ConstantInt>(V);
+  if (CI || !TD || !isa<Constant>(V) || !isa<PointerType>(V->getType()))
+    return CI;
+
+  // This is some kind of pointer constant. Turn it into a pointer-sized
+  // ConstantInt if possible.
+  const IntegerType *PtrTy = TD->getIntPtrType(V->getContext());
+
+  // Null pointer means 0, see SelectionDAGBuilder::getValue(const Value*).
+  if (isa<ConstantPointerNull>(V))
+    return ConstantInt::get(PtrTy, 0);
+
+  // IntToPtr const int.
+  if (ConstantExpr *CE = dyn_cast<ConstantExpr>(V))
+    if (CE->getOpcode() == Instruction::IntToPtr)
+      if (ConstantInt *CI = dyn_cast<ConstantInt>(CE->getOperand(0))) {
+        // The constant is very likely to have the right type already.
+        if (CI->getType() == PtrTy)
+          return CI;
+        else
+          return cast<ConstantInt>
+            (ConstantExpr::getIntegerCast(CI, PtrTy, /*isSigned=*/false));
+      }
+  return 0;
+}
+
 /// GatherConstantSetEQs - Given a potentially 'or'd together collection of
 /// icmp_eq instructions that compare a value against a constant, return the
 /// value being compared, and stick the constant into the Values vector.
-static Value *GatherConstantSetEQs(Value *V, std::vector<ConstantInt*> &Values){
+Value *SimplifyCFGOpt::
+GatherConstantSetEQs(Value *V, std::vector<ConstantInt*> &Values) {
   if (Instruction *Inst = dyn_cast<Instruction>(V)) {
     if (Inst->getOpcode() == Instruction::ICmp &&
         cast<ICmpInst>(Inst)->getPredicate() == ICmpInst::ICMP_EQ) {
-      if (ConstantInt *C = dyn_cast<ConstantInt>(Inst->getOperand(1))) {
+      if (ConstantInt *C = GetConstantInt(Inst->getOperand(1))) {
         Values.push_back(C);
         return Inst->getOperand(0);
-      } else if (ConstantInt *C = dyn_cast<ConstantInt>(Inst->getOperand(0))) {
+      } else if (ConstantInt *C = GetConstantInt(Inst->getOperand(0))) {
         Values.push_back(C);
         return Inst->getOperand(1);
       }
@@ -270,14 +324,15 @@ static Value *GatherConstantSetEQs(Value *V, std::vector<ConstantInt*> &Values){
 /// GatherConstantSetNEs - Given a potentially 'and'd together collection of
 /// setne instructions that compare a value against a constant, return the value
 /// being compared, and stick the constant into the Values vector.
-static Value *GatherConstantSetNEs(Value *V, std::vector<ConstantInt*> &Values){
+Value *SimplifyCFGOpt::
+GatherConstantSetNEs(Value *V, std::vector<ConstantInt*> &Values) {
   if (Instruction *Inst = dyn_cast<Instruction>(V)) {
     if (Inst->getOpcode() == Instruction::ICmp &&
                cast<ICmpInst>(Inst)->getPredicate() == ICmpInst::ICMP_NE) {
-      if (ConstantInt *C = dyn_cast<ConstantInt>(Inst->getOperand(1))) {
+      if (ConstantInt *C = GetConstantInt(Inst->getOperand(1))) {
         Values.push_back(C);
         return Inst->getOperand(0);
-      } else if (ConstantInt *C = dyn_cast<ConstantInt>(Inst->getOperand(0))) {
+      } else if (ConstantInt *C = GetConstantInt(Inst->getOperand(0))) {
         Values.push_back(C);
         return Inst->getOperand(1);
       }
@@ -294,8 +349,8 @@ static Value *GatherConstantSetNEs(Value *V, std::vector<ConstantInt*> &Values){
 /// GatherValueComparisons - If the specified Cond is an 'and' or 'or' of a
 /// bunch of comparisons of one value against constants, return the value and
 /// the constants being compared.
-static bool GatherValueComparisons(Instruction *Cond, Value *&CompVal,
-                                   std::vector<ConstantInt*> &Values) {
+bool SimplifyCFGOpt::GatherValueComparisons(Instruction *Cond, Value *&CompVal,
+                                            std::vector<ConstantInt*> &Values) {
   if (Cond->getOpcode() == Instruction::Or) {
     CompVal = GatherConstantSetEQs(Cond, Values);
 
@@ -327,29 +382,32 @@ static void EraseTerminatorInstAndDCECond(TerminatorInst *TI) {
 
 /// isValueEqualityComparison - Return true if the specified terminator checks
 /// to see if a value is equal to constant integer value.
-static Value *isValueEqualityComparison(TerminatorInst *TI) {
+Value *SimplifyCFGOpt::isValueEqualityComparison(TerminatorInst *TI) {
+  Value *CV = 0;
   if (SwitchInst *SI = dyn_cast<SwitchInst>(TI)) {
     // Do not permit merging of large switch instructions into their
     // predecessors unless there is only one predecessor.
-    if (SI->getNumSuccessors() * std::distance(pred_begin(SI->getParent()),
-                                               pred_end(SI->getParent())) > 128)
-      return 0;
-
-    return SI->getCondition();
-  }
-  if (BranchInst *BI = dyn_cast<BranchInst>(TI))
+    if (SI->getNumSuccessors()*std::distance(pred_begin(SI->getParent()),
+                                             pred_end(SI->getParent())) <= 128)
+      CV = SI->getCondition();
+  } else if (BranchInst *BI = dyn_cast<BranchInst>(TI))
     if (BI->isConditional() && BI->getCondition()->hasOneUse())
       if (ICmpInst *ICI = dyn_cast<ICmpInst>(BI->getCondition()))
         if ((ICI->getPredicate() == ICmpInst::ICMP_EQ ||
              ICI->getPredicate() == ICmpInst::ICMP_NE) &&
-            isa<ConstantInt>(ICI->getOperand(1)))
-          return ICI->getOperand(0);
-  return 0;
+            GetConstantInt(ICI->getOperand(1)))
+          CV = ICI->getOperand(0);
+
+  // Unwrap any lossless ptrtoint cast.
+  if (TD && CV && CV->getType() == TD->getIntPtrType(CV->getContext()))
+    if (PtrToIntInst *PTII = dyn_cast<PtrToIntInst>(CV))
+      CV = PTII->getOperand(0);
+  return CV;
 }
 
 /// GetValueEqualityComparisonCases - Given a value comparison instruction,
 /// decode all of the 'cases' that it represents and return the 'default' block.
-static BasicBlock *
+BasicBlock *SimplifyCFGOpt::
 GetValueEqualityComparisonCases(TerminatorInst *TI,
                                 std::vector<std::pair<ConstantInt*,
                                                       BasicBlock*> > &Cases) {
@@ -362,7 +420,7 @@ GetValueEqualityComparisonCases(TerminatorInst *TI,
 
   BranchInst *BI = cast<BranchInst>(TI);
   ICmpInst *ICI = cast<ICmpInst>(BI->getCondition());
-  Cases.push_back(std::make_pair(cast<ConstantInt>(ICI->getOperand(1)),
+  Cases.push_back(std::make_pair(GetConstantInt(ICI->getOperand(1)),
                                  BI->getSuccessor(ICI->getPredicate() ==
                                                   ICmpInst::ICMP_NE)));
   return BI->getSuccessor(ICI->getPredicate() == ICmpInst::ICMP_EQ);
@@ -421,8 +479,9 @@ ValuesOverlap(std::vector<std::pair<ConstantInt*, BasicBlock*> > &C1,
 /// comparison with the same value, and if that comparison determines the
 /// outcome of this comparison.  If so, simplify TI.  This does a very limited
 /// form of jump threading.
-static bool SimplifyEqualityComparisonWithOnlyPredecessor(TerminatorInst *TI,
-                                                          BasicBlock *Pred) {
+bool SimplifyCFGOpt::
+SimplifyEqualityComparisonWithOnlyPredecessor(TerminatorInst *TI,
+                                              BasicBlock *Pred) {
   Value *PredVal = isValueEqualityComparison(Pred->getTerminator());
   if (!PredVal) return false;  // Not a value comparison in predecessor.
 
@@ -548,7 +607,7 @@ namespace {
 /// equality comparison instruction (either a switch or a branch on "X == c").
 /// See if any of the predecessors of the terminator block are value comparisons
 /// on the same value.  If so, and if safe to do so, fold them together.
-static bool FoldValueComparisonIntoPredecessors(TerminatorInst *TI) {
+bool SimplifyCFGOpt::FoldValueComparisonIntoPredecessors(TerminatorInst *TI) {
   BasicBlock *BB = TI->getParent();
   Value *CV = isValueEqualityComparison(TI);  // CondVal
   assert(CV && "Not a comparison?");
@@ -641,6 +700,13 @@ static bool FoldValueComparisonIntoPredecessors(TerminatorInst *TI) {
       for (unsigned i = 0, e = NewSuccessors.size(); i != e; ++i)
         AddPredecessorToBlock(NewSuccessors[i], Pred, BB);
 
+      // Convert pointer to int before we switch.
+      if (isa<PointerType>(CV->getType())) {
+        assert(TD && "Cannot switch on pointer without TargetData");
+        CV = new PtrToIntInst(CV, TD->getIntPtrType(CV->getContext()),
+                              "magicptr", PTI);
+      }
+
       // Now that the successors are updated, create the new Switch instruction.
       SwitchInst *NewSI = SwitchInst::Create(CV, PredDefault,
                                              PredCases.size(), PTI);
@@ -1589,14 +1655,7 @@ static bool SimplifyCondBranchToCondBranch(BranchInst *PBI, BranchInst *BI) {
   return true;
 }
 
-/// SimplifyCFG - This function is used to do simplification of a CFG.  For
-/// example, it adjusts branches to branches to eliminate the extra hop, it
-/// eliminates unreachable basic blocks, and does other "peephole" optimization
-/// of the CFG.  It returns true if a modification was made.
-///
-/// WARNING:  The entry node of a function may not be simplified.
-///
-bool llvm::SimplifyCFG(BasicBlock *BB) {
+bool SimplifyCFGOpt::run(BasicBlock *BB) {
   bool Changed = false;
   Function *M = BB->getParent();
 
@@ -1997,7 +2056,7 @@ bool llvm::SimplifyCFG(BasicBlock *BB) {
         Value *CompVal = 0;
         std::vector<ConstantInt*> Values;
         bool TrueWhenEqual = GatherValueComparisons(Cond, CompVal, Values);
-        if (CompVal && CompVal->getType()->isInteger()) {
+        if (CompVal) {
           // There might be duplicate constants in the list, which the switch
           // instruction can't handle, remove them now.
           std::sort(Values.begin(), Values.end(), ConstantIntOrdering());
@@ -2008,6 +2067,14 @@ bool llvm::SimplifyCFG(BasicBlock *BB) {
           BasicBlock *EdgeBB    = BI->getSuccessor(0);
           if (!TrueWhenEqual) std::swap(DefaultBB, EdgeBB);
 
+          // Convert pointer to int before we switch.
+          if (isa<PointerType>(CompVal->getType())) {
+            assert(TD && "Cannot switch on pointer without TargetData");
+            CompVal = new PtrToIntInst(CompVal,
+                                       TD->getIntPtrType(CompVal->getContext()),
+                                       "magicptr", BI);
+          }
+
           // Create the new switch instruction now.
           SwitchInst *New = SwitchInst::Create(CompVal, DefaultBB,
                                                Values.size(), BI);
@@ -2035,3 +2102,14 @@ bool llvm::SimplifyCFG(BasicBlock *BB) {
 
   return Changed;
 }
+
+/// SimplifyCFG - This function is used to do simplification of a CFG.  For
+/// example, it adjusts branches to branches to eliminate the extra hop, it
+/// eliminates unreachable basic blocks, and does other "peephole" optimization
+/// of the CFG.  It returns true if a modification was made.
+///
+/// WARNING:  The entry node of a function may not be simplified.
+///
+bool llvm::SimplifyCFG(BasicBlock *BB, const TargetData *TD) {
+  return SimplifyCFGOpt(TD).run(BB);
+}
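Before building the SwitchInst, the code above sorts the gathered case constants and removes duplicates, since a switch cannot carry the same case value twice. A small standalone sketch of that dedup step (ints stand in for ConstantInt*):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Mirrors the sort + std::unique pass performed on Values in the patch
// above: sort brings duplicates together, unique drops the repeats.
std::vector<int> DedupCaseConstants(std::vector<int> Values) {
  std::sort(Values.begin(), Values.end());
  Values.erase(std::unique(Values.begin(), Values.end()), Values.end());
  return Values;
}
```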
diff --git a/libclamav/c++/llvm/lib/Transforms/Utils/ValueMapper.cpp b/libclamav/c++/llvm/lib/Transforms/Utils/ValueMapper.cpp
index a6e6701..6045048 100644
--- a/libclamav/c++/llvm/lib/Transforms/Utils/ValueMapper.cpp
+++ b/libclamav/c++/llvm/lib/Transforms/Utils/ValueMapper.cpp
@@ -35,7 +35,7 @@ Value *llvm::MapValue(const Value *V, ValueMapTy &VM) {
 
   if (const MDNode *MD = dyn_cast<MDNode>(V)) {
     SmallVector<Value*, 4> Elts;
-    for (unsigned i = 0; i != MD->getNumOperands(); i++)
+    for (unsigned i = 0, e = MD->getNumOperands(); i != e; ++i)
       Elts.push_back(MD->getOperand(i) ? MapValue(MD->getOperand(i), VM) : 0);
     return VM[V] = MDNode::get(V->getContext(), Elts.data(), Elts.size());
   }
diff --git a/libclamav/c++/llvm/lib/VMCore/AsmWriter.cpp b/libclamav/c++/llvm/lib/VMCore/AsmWriter.cpp
index c9f3849..4fe1eee 100644
--- a/libclamav/c++/llvm/lib/VMCore/AsmWriter.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/AsmWriter.cpp
@@ -27,6 +27,7 @@
 #include "llvm/ValueSymbolTable.h"
 #include "llvm/TypeSymbolTable.h"
 #include "llvm/ADT/DenseSet.h"
+#include "llvm/ADT/SmallString.h"
 #include "llvm/ADT/StringExtras.h"
 #include "llvm/ADT/STLExtras.h"
 #include "llvm/Support/CFG.h"
@@ -238,6 +239,19 @@ void TypePrinting::CalcTypeName(const Type *Ty,
       OS << '>';
     break;
   }
+  case Type::UnionTyID: {
+    const UnionType *UTy = cast<UnionType>(Ty);
+    OS << "union { ";
+    for (StructType::element_iterator I = UTy->element_begin(),
+         E = UTy->element_end(); I != E; ++I) {
+      CalcTypeName(*I, TypeStack, OS);
+      if (next(I) != UTy->element_end())
+        OS << ',';
+      OS << ' ';
+    }
+    OS << '}';
+    break;
+  }
   case Type::PointerTyID: {
     const PointerType *PTy = cast<PointerType>(Ty);
     CalcTypeName(PTy->getElementType(), TypeStack, OS);
@@ -855,7 +869,8 @@ static void WriteConstantInt(raw_ostream &Out, const Constant *CV,
       bool isDouble = &CFP->getValueAPF().getSemantics()==&APFloat::IEEEdouble;
       double Val = isDouble ? CFP->getValueAPF().convertToDouble() :
                               CFP->getValueAPF().convertToFloat();
-      std::string StrVal = ftostr(CFP->getValueAPF());
+      SmallString<128> StrVal;
+      raw_svector_ostream(StrVal) << Val;
 
       // Check to make sure that the stringized number is not some string like
       // "Inf" or NaN, that atof will accept, but the lexer will not.  Check
@@ -866,7 +881,7 @@ static void WriteConstantInt(raw_ostream &Out, const Constant *CV,
            (StrVal[1] >= '0' && StrVal[1] <= '9'))) {
         // Reparse stringized version!
         if (atof(StrVal.c_str()) == Val) {
-          Out << StrVal;
+          Out << StrVal.str();
           return;
         }
       }
@@ -1250,15 +1265,14 @@ public:
   void printArgument(const Argument *FA, Attributes Attrs);
   void printBasicBlock(const BasicBlock *BB);
   void printInstruction(const Instruction &I);
-private:
 
+private:
   // printInfoComment - Print a little comment after the instruction indicating
   // which slot it occupies.
   void printInfoComment(const Value &V);
 };
 }  // end of anonymous namespace
 
-
 void AssemblyWriter::writeOperand(const Value *Operand, bool PrintType) {
   if (Operand == 0) {
     Out << "<null operand!>";
@@ -1402,8 +1416,6 @@ static void PrintLinkage(GlobalValue::LinkageTypes LT,
   case GlobalValue::AvailableExternallyLinkage:
     Out << "available_externally ";
     break;
-    // This is invalid syntax and just a debugging aid.
-  case GlobalValue::GhostLinkage:	  Out << "ghost ";	    break;
   }
 }
 
@@ -1418,6 +1430,9 @@ static void PrintVisibility(GlobalValue::VisibilityTypes Vis,
 }
 
 void AssemblyWriter::printGlobal(const GlobalVariable *GV) {
+  if (GV->isMaterializable())
+    Out << "; Materializable\n";
+
   WriteAsOperandInternal(Out, GV, &TypePrinter, &Machine);
   Out << " = ";
 
@@ -1448,6 +1463,9 @@ void AssemblyWriter::printGlobal(const GlobalVariable *GV) {
 }
 
 void AssemblyWriter::printAlias(const GlobalAlias *GA) {
+  if (GA->isMaterializable())
+    Out << "; Materializable\n";
+
   // Don't crash when dumping partially built GA
   if (!GA->hasName())
     Out << "<<nameless>> = ";
@@ -1521,6 +1539,9 @@ void AssemblyWriter::printFunction(const Function *F) {
 
   if (AnnotationWriter) AnnotationWriter->emitFunctionAnnot(F, Out);
 
+  if (F->isMaterializable())
+    Out << "; Materializable\n";
+
   if (F->isDeclaration())
     Out << "declare ";
   else
@@ -1680,11 +1701,15 @@ void AssemblyWriter::printBasicBlock(const BasicBlock *BB) {
   if (AnnotationWriter) AnnotationWriter->emitBasicBlockEndAnnot(BB, Out);
 }
 
-
 /// printInfoComment - Print a little comment after the instruction indicating
 /// which slot it occupies.
 ///
 void AssemblyWriter::printInfoComment(const Value &V) {
+  if (AnnotationWriter) {
+    AnnotationWriter->printInfoComment(V, Out);
+    return;
+  }
+
   if (V.getType()->isVoidTy()) return;
   
   Out.PadToColumn(50);
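The WriteConstantInt hunk above only emits the short decimal form of a floating-point constant if reparsing that string (atof in the patch) reproduces the exact value; otherwise the writer must fall back to an exact encoding. A sketch of that round-trip check using snprintf/strtod (the "%g" format and 6-significant-digit default are this sketch's assumptions, not LLVM's exact formatting):

```cpp
#include <cassert>
#include <cstdio>
#include <cstdlib>
#include <string>

// Returns true if Val survives a print/parse round trip through its short
// decimal form; Str receives the printed form either way.
bool DecimalRoundTrips(double Val, std::string &Str) {
  char Buf[64];
  std::snprintf(Buf, sizeof(Buf), "%g", Val); // short decimal form
  Str = Buf;
  return std::strtod(Buf, nullptr) == Val;
}
```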
diff --git a/libclamav/c++/llvm/lib/VMCore/Attributes.cpp b/libclamav/c++/llvm/lib/VMCore/Attributes.cpp
index 65155f1..6fa597e 100644
--- a/libclamav/c++/llvm/lib/VMCore/Attributes.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Attributes.cpp
@@ -56,6 +56,8 @@ std::string Attribute::getAsString(Attributes Attrs) {
     Result += "optsize ";
   if (Attrs & Attribute::NoInline)
     Result += "noinline ";
+  if (Attrs & Attribute::InlineHint)
+    Result += "inlinehint ";
   if (Attrs & Attribute::AlwaysInline)
     Result += "alwaysinline ";
   if (Attrs & Attribute::StackProtect)
@@ -68,6 +70,11 @@ std::string Attribute::getAsString(Attributes Attrs) {
     Result += "noimplicitfloat ";
   if (Attrs & Attribute::Naked)
     Result += "naked ";
+  if (Attrs & Attribute::StackAlignment) {
+    Result += "alignstack(";
+    Result += utostr(Attribute::getStackAlignmentFromAttrs(Attrs));
+    Result += ") ";
+  }
   if (Attrs & Attribute::Alignment) {
     Result += "align ";
     Result += utostr(Attribute::getAlignmentFromAttrs(Attrs));
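Attribute::getAsString above builds the printable form by appending one keyword per set bit of the attribute mask. A minimal sketch of that pattern (the bit values here are illustrative, not LLVM's actual encoding):

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Illustrative attribute bits; LLVM's real values differ.
enum : uint32_t { OptSize = 1u << 0, NoInline = 1u << 1, InlineHint = 1u << 2 };

// Mirrors getAsString: test each bit and append its keyword plus a
// trailing space, matching the output style of the function above.
std::string AttrsToString(uint32_t Attrs) {
  std::string Result;
  if (Attrs & OptSize)    Result += "optsize ";
  if (Attrs & NoInline)   Result += "noinline ";
  if (Attrs & InlineHint) Result += "inlinehint ";
  return Result;
}
```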
diff --git a/libclamav/c++/llvm/lib/VMCore/CMakeLists.txt b/libclamav/c++/llvm/lib/VMCore/CMakeLists.txt
index 5ecedf1..4b80e36 100644
--- a/libclamav/c++/llvm/lib/VMCore/CMakeLists.txt
+++ b/libclamav/c++/llvm/lib/VMCore/CMakeLists.txt
@@ -8,17 +8,17 @@ add_llvm_library(LLVMCore
   Core.cpp
   Dominators.cpp
   Function.cpp
+  GVMaterializer.cpp
   Globals.cpp
+  IRBuilder.cpp
   InlineAsm.cpp
   Instruction.cpp
   Instructions.cpp
   IntrinsicInst.cpp
-  IRBuilder.cpp
   LLVMContext.cpp
   LeakDetector.cpp
   Metadata.cpp
   Module.cpp
-  ModuleProvider.cpp
   Pass.cpp
   PassManager.cpp
   PrintModulePass.cpp
diff --git a/libclamav/c++/llvm/lib/VMCore/ConstantFold.cpp b/libclamav/c++/llvm/lib/VMCore/ConstantFold.cpp
index ddd5587..4a245d2 100644
--- a/libclamav/c++/llvm/lib/VMCore/ConstantFold.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/ConstantFold.cpp
@@ -24,7 +24,6 @@
 #include "llvm/Function.h"
 #include "llvm/GlobalAlias.h"
 #include "llvm/GlobalVariable.h"
-#include "llvm/LLVMContext.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/Support/Compiler.h"
 #include "llvm/Support/ErrorHandling.h"
@@ -41,7 +40,7 @@ using namespace llvm;
 /// BitCastConstantVector - Convert the specified ConstantVector node to the
 /// specified vector type.  At this point, we know that the elements of the
 /// input vector constant are all simple integer or FP values.
-static Constant *BitCastConstantVector(LLVMContext &Context, ConstantVector *CV,
+static Constant *BitCastConstantVector(ConstantVector *CV,
                                        const VectorType *DstTy) {
   // If this cast changes element count then we can't handle it here:
   // doing so requires endianness information.  This should be handled by
@@ -91,8 +90,7 @@ foldConstantCastPair(
                                         Type::getInt64Ty(DstTy->getContext()));
 }
 
-static Constant *FoldBitCast(LLVMContext &Context, 
-                             Constant *V, const Type *DestTy) {
+static Constant *FoldBitCast(Constant *V, const Type *DestTy) {
   const Type *SrcTy = V->getType();
   if (SrcTy == DestTy)
     return V; // no-op cast
@@ -103,7 +101,8 @@ static Constant *FoldBitCast(LLVMContext &Context,
     if (const PointerType *DPTy = dyn_cast<PointerType>(DestTy))
       if (PTy->getAddressSpace() == DPTy->getAddressSpace()) {
         SmallVector<Value*, 8> IdxList;
-        Value *Zero = Constant::getNullValue(Type::getInt32Ty(Context));
+        Value *Zero =
+          Constant::getNullValue(Type::getInt32Ty(DPTy->getContext()));
         IdxList.push_back(Zero);
         const Type *ElTy = PTy->getElementType();
         while (ElTy != DPTy->getElementType()) {
@@ -139,15 +138,14 @@ static Constant *FoldBitCast(LLVMContext &Context,
         return Constant::getNullValue(DestTy);
 
       if (ConstantVector *CV = dyn_cast<ConstantVector>(V))
-        return BitCastConstantVector(Context, CV, DestPTy);
+        return BitCastConstantVector(CV, DestPTy);
     }
 
     // Canonicalize scalar-to-vector bitcasts into vector-to-vector bitcasts
     // This allows for other simplifications (although some of them
     // can only be handled by Analysis/ConstantFolding.cpp).
     if (isa<ConstantInt>(V) || isa<ConstantFP>(V))
-      return ConstantExpr::getBitCast(
-                                     ConstantVector::get(&V, 1), DestPTy);
+      return ConstantExpr::getBitCast(ConstantVector::get(&V, 1), DestPTy);
   }
 
   // Finally, implement bitcast folding now.   The code below doesn't handle
@@ -163,17 +161,18 @@ static Constant *FoldBitCast(LLVMContext &Context,
       return V;
 
     if (DestTy->isFloatingPoint())
-      return ConstantFP::get(Context, APFloat(CI->getValue(),
-                                     DestTy != Type::getPPC_FP128Ty(Context)));
+      return ConstantFP::get(DestTy->getContext(),
+                             APFloat(CI->getValue(),
+                                     !DestTy->isPPC_FP128Ty()));
 
     // Otherwise, can't fold this (vector?)
     return 0;
   }
 
-  // Handle ConstantFP input.
+  // Handle ConstantFP input: FP -> Integral.
   if (ConstantFP *FP = dyn_cast<ConstantFP>(V))
-    // FP -> Integral.
-    return ConstantInt::get(Context, FP->getValueAPF().bitcastToAPInt());
+    return ConstantInt::get(FP->getContext(),
+                            FP->getValueAPF().bitcastToAPInt());
 
   return 0;
 }
@@ -323,9 +322,195 @@ static Constant *ExtractConstantBytes(Constant *C, unsigned ByteStart,
   }
 }
 
+/// getFoldedSizeOf - Return a ConstantExpr with type DestTy for sizeof
+/// on Ty, with any known factors factored out. If Folded is false,
+/// return null if no factoring was possible, to avoid endlessly
+/// bouncing an unfoldable expression back into the top-level folder.
+///
+static Constant *getFoldedSizeOf(const Type *Ty, const Type *DestTy,
+                                 bool Folded) {
+  if (const ArrayType *ATy = dyn_cast<ArrayType>(Ty)) {
+    Constant *N = ConstantInt::get(DestTy, ATy->getNumElements());
+    Constant *E = getFoldedSizeOf(ATy->getElementType(), DestTy, true);
+    return ConstantExpr::getNUWMul(E, N);
+  }
+  if (const VectorType *VTy = dyn_cast<VectorType>(Ty)) {
+    Constant *N = ConstantInt::get(DestTy, VTy->getNumElements());
+    Constant *E = getFoldedSizeOf(VTy->getElementType(), DestTy, true);
+    return ConstantExpr::getNUWMul(E, N);
+  }
+  if (const StructType *STy = dyn_cast<StructType>(Ty))
+    if (!STy->isPacked()) {
+      unsigned NumElems = STy->getNumElements();
+      // An empty struct has size zero.
+      if (NumElems == 0)
+        return ConstantExpr::getNullValue(DestTy);
+      // Check for a struct with all members having the same size.
+      Constant *MemberSize =
+        getFoldedSizeOf(STy->getElementType(0), DestTy, true);
+      bool AllSame = true;
+      for (unsigned i = 1; i != NumElems; ++i)
+        if (MemberSize !=
+            getFoldedSizeOf(STy->getElementType(i), DestTy, true)) {
+          AllSame = false;
+          break;
+        }
+      if (AllSame) {
+        Constant *N = ConstantInt::get(DestTy, NumElems);
+        return ConstantExpr::getNUWMul(MemberSize, N);
+      }
+    }
+
+  // Pointer size doesn't depend on the pointee type, so canonicalize them
+  // to an arbitrary pointee.
+  if (const PointerType *PTy = dyn_cast<PointerType>(Ty))
+    if (!PTy->getElementType()->isInteger(1))
+      return
+        getFoldedSizeOf(PointerType::get(IntegerType::get(PTy->getContext(), 1),
+                                         PTy->getAddressSpace()),
+                        DestTy, true);
+
+  // If there's no interesting folding happening, bail so that we don't create
+  // a constant that looks like it needs folding but really doesn't.
+  if (!Folded)
+    return 0;
+
+  // Base case: Get a regular sizeof expression.
+  Constant *C = ConstantExpr::getSizeOf(Ty);
+  C = ConstantExpr::getCast(CastInst::getCastOpcode(C, false,
+                                                    DestTy, false),
+                            C, DestTy);
+  return C;
+}
+
+/// getFoldedAlignOf - Return a ConstantExpr with type DestTy for alignof
+/// on Ty, with any known factors factored out. If Folded is false,
+/// return null if no factoring was possible, to avoid endlessly
+/// bouncing an unfoldable expression back into the top-level folder.
+///
+static Constant *getFoldedAlignOf(const Type *Ty, const Type *DestTy,
+                                  bool Folded) {
+  // The alignment of an array is equal to the alignment of the
+  // array element. Note that this is not always true for vectors.
+  if (const ArrayType *ATy = dyn_cast<ArrayType>(Ty)) {
+    Constant *C = ConstantExpr::getAlignOf(ATy->getElementType());
+    C = ConstantExpr::getCast(CastInst::getCastOpcode(C, false,
+                                                      DestTy,
+                                                      false),
+                              C, DestTy);
+    return C;
+  }
+
+  if (const StructType *STy = dyn_cast<StructType>(Ty)) {
+    // Packed structs always have an alignment of 1.
+    if (STy->isPacked())
+      return ConstantInt::get(DestTy, 1);
+
+    // Otherwise, struct alignment is the maximum alignment of any member.
+    // Without target data, we can't compare much, but we can check to see
+    // if all the members have the same alignment.
+    unsigned NumElems = STy->getNumElements();
+    // An empty struct has minimal alignment.
+    if (NumElems == 0)
+      return ConstantInt::get(DestTy, 1);
+    // Check for a struct with all members having the same alignment.
+    Constant *MemberAlign =
+      getFoldedAlignOf(STy->getElementType(0), DestTy, true);
+    bool AllSame = true;
+    for (unsigned i = 1; i != NumElems; ++i)
+      if (MemberAlign != getFoldedAlignOf(STy->getElementType(i), DestTy, true)) {
+        AllSame = false;
+        break;
+      }
+    if (AllSame)
+      return MemberAlign;
+  }
+
+  // Pointer alignment doesn't depend on the pointee type, so canonicalize them
+  // to an arbitrary pointee.
+  if (const PointerType *PTy = dyn_cast<PointerType>(Ty))
+    if (!PTy->getElementType()->isInteger(1))
+      return
+        getFoldedAlignOf(PointerType::get(IntegerType::get(PTy->getContext(),
+                                                           1),
+                                          PTy->getAddressSpace()),
+                         DestTy, true);
+
+  // If there's no interesting folding happening, bail so that we don't create
+  // a constant that looks like it needs folding but really doesn't.
+  if (!Folded)
+    return 0;
+
+  // Base case: Get a regular alignof expression.
+  Constant *C = ConstantExpr::getAlignOf(Ty);
+  C = ConstantExpr::getCast(CastInst::getCastOpcode(C, false,
+                                                    DestTy, false),
+                            C, DestTy);
+  return C;
+}
+
+/// getFoldedOffsetOf - Return a ConstantExpr with type DestTy for offsetof
+/// on Ty and FieldNo, with any known factors factored out. If Folded is false,
+/// return null if no factoring was possible, to avoid endlessly
+/// bouncing an unfoldable expression back into the top-level folder.
+///
+static Constant *getFoldedOffsetOf(const Type *Ty, Constant *FieldNo,
+                                   const Type *DestTy,
+                                   bool Folded) {
+  if (const ArrayType *ATy = dyn_cast<ArrayType>(Ty)) {
+    Constant *N = ConstantExpr::getCast(CastInst::getCastOpcode(FieldNo, false,
+                                                                DestTy, false),
+                                        FieldNo, DestTy);
+    Constant *E = getFoldedSizeOf(ATy->getElementType(), DestTy, true);
+    return ConstantExpr::getNUWMul(E, N);
+  }
+  if (const VectorType *VTy = dyn_cast<VectorType>(Ty)) {
+    Constant *N = ConstantExpr::getCast(CastInst::getCastOpcode(FieldNo, false,
+                                                                DestTy, false),
+                                        FieldNo, DestTy);
+    Constant *E = getFoldedSizeOf(VTy->getElementType(), DestTy, true);
+    return ConstantExpr::getNUWMul(E, N);
+  }
+  if (const StructType *STy = dyn_cast<StructType>(Ty))
+    if (!STy->isPacked()) {
+      unsigned NumElems = STy->getNumElements();
+      // An empty struct has no members.
+      if (NumElems == 0)
+        return 0;
+      // Check for a struct with all members having the same size.
+      Constant *MemberSize =
+        getFoldedSizeOf(STy->getElementType(0), DestTy, true);
+      bool AllSame = true;
+      for (unsigned i = 1; i != NumElems; ++i)
+        if (MemberSize !=
+            getFoldedSizeOf(STy->getElementType(i), DestTy, true)) {
+          AllSame = false;
+          break;
+        }
+      if (AllSame) {
+        Constant *N = ConstantExpr::getCast(CastInst::getCastOpcode(FieldNo,
+                                                                    false,
+                                                                    DestTy,
+                                                                    false),
+                                            FieldNo, DestTy);
+        return ConstantExpr::getNUWMul(MemberSize, N);
+      }
+    }
+
+  // If there's no interesting folding happening, bail so that we don't create
+  // a constant that looks like it needs folding but really doesn't.
+  if (!Folded)
+    return 0;
+
+  // Base case: Get a regular offsetof expression.
+  Constant *C = ConstantExpr::getOffsetOf(Ty, FieldNo);
+  C = ConstantExpr::getCast(CastInst::getCastOpcode(C, false,
+                                                    DestTy, false),
+                            C, DestTy);
+  return C;
+}
 
-Constant *llvm::ConstantFoldCastInstruction(LLVMContext &Context, 
-                                            unsigned opc, Constant *V,
+Constant *llvm::ConstantFoldCastInstruction(unsigned opc, Constant *V,
                                             const Type *DestTy) {
   if (isa<UndefValue>(V)) {
     // zext(undef) = 0, because the top bits will be zero.
@@ -394,7 +579,7 @@ Constant *llvm::ConstantFoldCastInstruction(LLVMContext &Context,
                   DestTy->isFP128Ty() ? APFloat::IEEEquad :
                   APFloat::Bogus,
                   APFloat::rmNearestTiesToEven, &ignored);
-      return ConstantFP::get(Context, Val);
+      return ConstantFP::get(V->getContext(), Val);
     }
     return 0; // Can't fold.
   case Instruction::FPToUI: 
@@ -407,7 +592,7 @@ Constant *llvm::ConstantFoldCastInstruction(LLVMContext &Context,
       (void) V.convertToInteger(x, DestBitWidth, opc==Instruction::FPToSI,
                                 APFloat::rmTowardZero, &ignored);
       APInt Val(DestBitWidth, 2, x);
-      return ConstantInt::get(Context, Val);
+      return ConstantInt::get(FPC->getContext(), Val);
     }
     return 0; // Can't fold.
   case Instruction::IntToPtr:   //always treated as unsigned
@@ -415,9 +600,49 @@ Constant *llvm::ConstantFoldCastInstruction(LLVMContext &Context,
       return ConstantPointerNull::get(cast<PointerType>(DestTy));
     return 0;                   // Other pointer types cannot be casted
   case Instruction::PtrToInt:   // always treated as unsigned
-    if (V->isNullValue())       // is it a null pointer value?
+    // Is it a null pointer value?
+    if (V->isNullValue())
       return ConstantInt::get(DestTy, 0);
-    return 0;                   // Other pointer types cannot be casted
+    // If this is a sizeof-like expression, pull out multiplications by
+    // known factors to expose them to subsequent folding. If it's an
+    // alignof-like expression, factor out known factors.
+    if (ConstantExpr *CE = dyn_cast<ConstantExpr>(V))
+      if (CE->getOpcode() == Instruction::GetElementPtr &&
+          CE->getOperand(0)->isNullValue()) {
+        const Type *Ty =
+          cast<PointerType>(CE->getOperand(0)->getType())->getElementType();
+        if (CE->getNumOperands() == 2) {
+          // Handle a sizeof-like expression.
+          Constant *Idx = CE->getOperand(1);
+          bool isOne = isa<ConstantInt>(Idx) && cast<ConstantInt>(Idx)->isOne();
+          if (Constant *C = getFoldedSizeOf(Ty, DestTy, !isOne)) {
+            Idx = ConstantExpr::getCast(CastInst::getCastOpcode(Idx, true,
+                                                                DestTy, false),
+                                        Idx, DestTy);
+            return ConstantExpr::getMul(C, Idx);
+          }
+        } else if (CE->getNumOperands() == 3 &&
+                   CE->getOperand(1)->isNullValue()) {
+          // Handle an alignof-like expression.
+          if (const StructType *STy = dyn_cast<StructType>(Ty))
+            if (!STy->isPacked()) {
+              ConstantInt *CI = cast<ConstantInt>(CE->getOperand(2));
+              if (CI->isOne() &&
+                  STy->getNumElements() == 2 &&
+                  STy->getElementType(0)->isInteger(1)) {
+                return getFoldedAlignOf(STy->getElementType(1), DestTy, false);
+              }
+            }
+          // Handle an offsetof-like expression.
+          if (isa<StructType>(Ty) || isa<ArrayType>(Ty) || isa<VectorType>(Ty)){
+            if (Constant *C = getFoldedOffsetOf(Ty, CE->getOperand(2),
+                                                DestTy, false))
+              return C;
+          }
+        }
+      }
+    // Other pointer types cannot be cast
+    return 0;
   case Instruction::UIToFP:
   case Instruction::SIToFP:
     if (ConstantInt *CI = dyn_cast<ConstantInt>(V)) {
@@ -428,7 +653,7 @@ Constant *llvm::ConstantFoldCastInstruction(LLVMContext &Context,
       (void)apf.convertFromAPInt(api, 
                                  opc==Instruction::SIToFP,
                                  APFloat::rmNearestTiesToEven);
-      return ConstantFP::get(Context, apf);
+      return ConstantFP::get(V->getContext(), apf);
     }
     return 0;
   case Instruction::ZExt:
@@ -436,7 +661,7 @@ Constant *llvm::ConstantFoldCastInstruction(LLVMContext &Context,
       uint32_t BitWidth = cast<IntegerType>(DestTy)->getBitWidth();
       APInt Result(CI->getValue());
       Result.zext(BitWidth);
-      return ConstantInt::get(Context, Result);
+      return ConstantInt::get(V->getContext(), Result);
     }
     return 0;
   case Instruction::SExt:
@@ -444,7 +669,7 @@ Constant *llvm::ConstantFoldCastInstruction(LLVMContext &Context,
       uint32_t BitWidth = cast<IntegerType>(DestTy)->getBitWidth();
       APInt Result(CI->getValue());
       Result.sext(BitWidth);
-      return ConstantInt::get(Context, Result);
+      return ConstantInt::get(V->getContext(), Result);
     }
     return 0;
   case Instruction::Trunc: {
@@ -452,7 +677,7 @@ Constant *llvm::ConstantFoldCastInstruction(LLVMContext &Context,
     if (ConstantInt *CI = dyn_cast<ConstantInt>(V)) {
       APInt Result(CI->getValue());
       Result.trunc(DestBitWidth);
-      return ConstantInt::get(Context, Result);
+      return ConstantInt::get(V->getContext(), Result);
     }
     
     // The input must be a constantexpr.  See if we can simplify this based on
@@ -466,12 +691,11 @@ Constant *llvm::ConstantFoldCastInstruction(LLVMContext &Context,
     return 0;
   }
   case Instruction::BitCast:
-    return FoldBitCast(Context, V, DestTy);
+    return FoldBitCast(V, DestTy);
   }
 }
 
-Constant *llvm::ConstantFoldSelectInstruction(LLVMContext&,
-                                              Constant *Cond,
+Constant *llvm::ConstantFoldSelectInstruction(Constant *Cond,
                                               Constant *V1, Constant *V2) {
   if (ConstantInt *CB = dyn_cast<ConstantInt>(Cond))
     return CB->getZExtValue() ? V1 : V2;
@@ -483,8 +707,7 @@ Constant *llvm::ConstantFoldSelectInstruction(LLVMContext&,
   return 0;
 }
 
-Constant *llvm::ConstantFoldExtractElementInstruction(LLVMContext &Context,
-                                                      Constant *Val,
+Constant *llvm::ConstantFoldExtractElementInstruction(Constant *Val,
                                                       Constant *Idx) {
   if (isa<UndefValue>(Val))  // ee(undef, x) -> undef
     return UndefValue::get(cast<VectorType>(Val->getType())->getElementType());
@@ -503,8 +726,7 @@ Constant *llvm::ConstantFoldExtractElementInstruction(LLVMContext &Context,
   return 0;
 }
 
-Constant *llvm::ConstantFoldInsertElementInstruction(LLVMContext &Context,
-                                                     Constant *Val,
+Constant *llvm::ConstantFoldInsertElementInstruction(Constant *Val,
                                                      Constant *Elt,
                                                      Constant *Idx) {
   ConstantInt *CIdx = dyn_cast<ConstantInt>(Idx);
@@ -563,8 +785,7 @@ Constant *llvm::ConstantFoldInsertElementInstruction(LLVMContext &Context,
 
 /// GetVectorElement - If C is a ConstantVector, ConstantAggregateZero or Undef
 /// return the specified element value.  Otherwise return null.
-static Constant *GetVectorElement(LLVMContext &Context, Constant *C,
-                                  unsigned EltNo) {
+static Constant *GetVectorElement(Constant *C, unsigned EltNo) {
   if (ConstantVector *CV = dyn_cast<ConstantVector>(C))
     return CV->getOperand(EltNo);
 
@@ -576,8 +797,7 @@ static Constant *GetVectorElement(LLVMContext &Context, Constant *C,
   return 0;
 }
 
-Constant *llvm::ConstantFoldShuffleVectorInstruction(LLVMContext &Context,
-                                                     Constant *V1,
+Constant *llvm::ConstantFoldShuffleVectorInstruction(Constant *V1,
                                                      Constant *V2,
                                                      Constant *Mask) {
   // Undefined shuffle mask -> undefined value.
@@ -590,7 +810,7 @@ Constant *llvm::ConstantFoldShuffleVectorInstruction(LLVMContext &Context,
   // Loop over the shuffle mask, evaluating each element.
   SmallVector<Constant*, 32> Result;
   for (unsigned i = 0; i != MaskNumElts; ++i) {
-    Constant *InElt = GetVectorElement(Context, Mask, i);
+    Constant *InElt = GetVectorElement(Mask, i);
     if (InElt == 0) return 0;
 
     if (isa<UndefValue>(InElt))
@@ -600,9 +820,9 @@ Constant *llvm::ConstantFoldShuffleVectorInstruction(LLVMContext &Context,
       if (Elt >= SrcNumElts*2)
         InElt = UndefValue::get(EltTy);
       else if (Elt >= SrcNumElts)
-        InElt = GetVectorElement(Context, V2, Elt - SrcNumElts);
+        InElt = GetVectorElement(V2, Elt - SrcNumElts);
       else
-        InElt = GetVectorElement(Context, V1, Elt);
+        InElt = GetVectorElement(V1, Elt);
       if (InElt == 0) return 0;
     } else {
       // Unknown value.
@@ -614,8 +834,7 @@ Constant *llvm::ConstantFoldShuffleVectorInstruction(LLVMContext &Context,
   return ConstantVector::get(&Result[0], Result.size());
 }
 
-Constant *llvm::ConstantFoldExtractValueInstruction(LLVMContext &Context,
-                                                    Constant *Agg,
+Constant *llvm::ConstantFoldExtractValueInstruction(Constant *Agg,
                                                     const unsigned *Idxs,
                                                     unsigned NumIdx) {
   // Base case: no indices, so return the entire value.
@@ -635,19 +854,18 @@ Constant *llvm::ConstantFoldExtractValueInstruction(LLVMContext &Context,
 
   // Otherwise recurse.
   if (ConstantStruct *CS = dyn_cast<ConstantStruct>(Agg))
-    return ConstantFoldExtractValueInstruction(Context, CS->getOperand(*Idxs),
+    return ConstantFoldExtractValueInstruction(CS->getOperand(*Idxs),
                                                Idxs+1, NumIdx-1);
 
   if (ConstantArray *CA = dyn_cast<ConstantArray>(Agg))
-    return ConstantFoldExtractValueInstruction(Context, CA->getOperand(*Idxs),
+    return ConstantFoldExtractValueInstruction(CA->getOperand(*Idxs),
                                                Idxs+1, NumIdx-1);
   ConstantVector *CV = cast<ConstantVector>(Agg);
-  return ConstantFoldExtractValueInstruction(Context, CV->getOperand(*Idxs),
+  return ConstantFoldExtractValueInstruction(CV->getOperand(*Idxs),
                                              Idxs+1, NumIdx-1);
 }
 
-Constant *llvm::ConstantFoldInsertValueInstruction(LLVMContext &Context,
-                                                   Constant *Agg,
+Constant *llvm::ConstantFoldInsertValueInstruction(Constant *Agg,
                                                    Constant *Val,
                                                    const unsigned *Idxs,
                                                    unsigned NumIdx) {
@@ -667,6 +885,8 @@ Constant *llvm::ConstantFoldInsertValueInstruction(LLVMContext &Context,
     unsigned numOps;
     if (const ArrayType *AR = dyn_cast<ArrayType>(AggTy))
       numOps = AR->getNumElements();
+    else if (isa<UnionType>(AggTy))
+      numOps = 1;
     else
       numOps = cast<StructType>(AggTy)->getNumElements();
     
@@ -675,14 +895,18 @@ Constant *llvm::ConstantFoldInsertValueInstruction(LLVMContext &Context,
       const Type *MemberTy = AggTy->getTypeAtIndex(i);
       Constant *Op =
         (*Idxs == i) ?
-        ConstantFoldInsertValueInstruction(Context, UndefValue::get(MemberTy),
+        ConstantFoldInsertValueInstruction(UndefValue::get(MemberTy),
                                            Val, Idxs+1, NumIdx-1) :
         UndefValue::get(MemberTy);
       Ops[i] = Op;
     }
     
     if (const StructType* ST = dyn_cast<StructType>(AggTy))
-      return ConstantStruct::get(Context, Ops, ST->isPacked());
+      return ConstantStruct::get(ST->getContext(), Ops, ST->isPacked());
+    if (const UnionType* UT = dyn_cast<UnionType>(AggTy)) {
+      assert(Ops.size() == 1 && "Union can only contain a single value!");
+      return ConstantUnion::get(UT, Ops[0]);
+    }
     return ConstantArray::get(cast<ArrayType>(AggTy), Ops);
   }
   
@@ -706,15 +930,14 @@ Constant *llvm::ConstantFoldInsertValueInstruction(LLVMContext &Context,
       const Type *MemberTy = AggTy->getTypeAtIndex(i);
       Constant *Op =
         (*Idxs == i) ?
-        ConstantFoldInsertValueInstruction(Context, 
-                                           Constant::getNullValue(MemberTy),
+        ConstantFoldInsertValueInstruction(Constant::getNullValue(MemberTy),
                                            Val, Idxs+1, NumIdx-1) :
         Constant::getNullValue(MemberTy);
       Ops[i] = Op;
     }
     
-    if (const StructType* ST = dyn_cast<StructType>(AggTy))
-      return ConstantStruct::get(Context, Ops, ST->isPacked());
+    if (const StructType *ST = dyn_cast<StructType>(AggTy))
+      return ConstantStruct::get(ST->getContext(), Ops, ST->isPacked());
     return ConstantArray::get(cast<ArrayType>(AggTy), Ops);
   }
   
@@ -724,13 +947,12 @@ Constant *llvm::ConstantFoldInsertValueInstruction(LLVMContext &Context,
     for (unsigned i = 0; i < Agg->getNumOperands(); ++i) {
       Constant *Op = cast<Constant>(Agg->getOperand(i));
       if (*Idxs == i)
-        Op = ConstantFoldInsertValueInstruction(Context, Op,
-                                                Val, Idxs+1, NumIdx-1);
+        Op = ConstantFoldInsertValueInstruction(Op, Val, Idxs+1, NumIdx-1);
       Ops[i] = Op;
     }
     
     if (const StructType* ST = dyn_cast<StructType>(Agg->getType()))
-      return ConstantStruct::get(Context, Ops, ST->isPacked());
+      return ConstantStruct::get(ST->getContext(), Ops, ST->isPacked());
     return ConstantArray::get(cast<ArrayType>(Agg->getType()), Ops);
   }
 
@@ -738,8 +960,7 @@ Constant *llvm::ConstantFoldInsertValueInstruction(LLVMContext &Context,
 }
 
 
-Constant *llvm::ConstantFoldBinaryInstruction(LLVMContext &Context,
-                                              unsigned Opcode,
+Constant *llvm::ConstantFoldBinaryInstruction(unsigned Opcode,
                                               Constant *C1, Constant *C2) {
   // No compile-time operations on this type yet.
   if (C1->getType()->isPPC_FP128Ty())
@@ -896,51 +1117,51 @@ Constant *llvm::ConstantFoldBinaryInstruction(LLVMContext &Context,
       default:
         break;
       case Instruction::Add:     
-        return ConstantInt::get(Context, C1V + C2V);
+        return ConstantInt::get(CI1->getContext(), C1V + C2V);
       case Instruction::Sub:     
-        return ConstantInt::get(Context, C1V - C2V);
+        return ConstantInt::get(CI1->getContext(), C1V - C2V);
       case Instruction::Mul:     
-        return ConstantInt::get(Context, C1V * C2V);
+        return ConstantInt::get(CI1->getContext(), C1V * C2V);
       case Instruction::UDiv:
         assert(!CI2->isNullValue() && "Div by zero handled above");
-        return ConstantInt::get(Context, C1V.udiv(C2V));
+        return ConstantInt::get(CI1->getContext(), C1V.udiv(C2V));
       case Instruction::SDiv:
         assert(!CI2->isNullValue() && "Div by zero handled above");
         if (C2V.isAllOnesValue() && C1V.isMinSignedValue())
           return UndefValue::get(CI1->getType());   // MIN_INT / -1 -> undef
-        return ConstantInt::get(Context, C1V.sdiv(C2V));
+        return ConstantInt::get(CI1->getContext(), C1V.sdiv(C2V));
       case Instruction::URem:
         assert(!CI2->isNullValue() && "Div by zero handled above");
-        return ConstantInt::get(Context, C1V.urem(C2V));
+        return ConstantInt::get(CI1->getContext(), C1V.urem(C2V));
       case Instruction::SRem:
         assert(!CI2->isNullValue() && "Div by zero handled above");
         if (C2V.isAllOnesValue() && C1V.isMinSignedValue())
           return UndefValue::get(CI1->getType());   // MIN_INT % -1 -> undef
-        return ConstantInt::get(Context, C1V.srem(C2V));
+        return ConstantInt::get(CI1->getContext(), C1V.srem(C2V));
       case Instruction::And:
-        return ConstantInt::get(Context, C1V & C2V);
+        return ConstantInt::get(CI1->getContext(), C1V & C2V);
       case Instruction::Or:
-        return ConstantInt::get(Context, C1V | C2V);
+        return ConstantInt::get(CI1->getContext(), C1V | C2V);
       case Instruction::Xor:
-        return ConstantInt::get(Context, C1V ^ C2V);
+        return ConstantInt::get(CI1->getContext(), C1V ^ C2V);
       case Instruction::Shl: {
         uint32_t shiftAmt = C2V.getZExtValue();
         if (shiftAmt < C1V.getBitWidth())
-          return ConstantInt::get(Context, C1V.shl(shiftAmt));
+          return ConstantInt::get(CI1->getContext(), C1V.shl(shiftAmt));
         else
           return UndefValue::get(C1->getType()); // too big shift is undef
       }
       case Instruction::LShr: {
         uint32_t shiftAmt = C2V.getZExtValue();
         if (shiftAmt < C1V.getBitWidth())
-          return ConstantInt::get(Context, C1V.lshr(shiftAmt));
+          return ConstantInt::get(CI1->getContext(), C1V.lshr(shiftAmt));
         else
           return UndefValue::get(C1->getType()); // too big shift is undef
       }
       case Instruction::AShr: {
         uint32_t shiftAmt = C2V.getZExtValue();
         if (shiftAmt < C1V.getBitWidth())
-          return ConstantInt::get(Context, C1V.ashr(shiftAmt));
+          return ConstantInt::get(CI1->getContext(), C1V.ashr(shiftAmt));
         else
           return UndefValue::get(C1->getType()); // too big shift is undef
       }
@@ -970,19 +1191,19 @@ Constant *llvm::ConstantFoldBinaryInstruction(LLVMContext &Context,
         break;
       case Instruction::FAdd:
         (void)C3V.add(C2V, APFloat::rmNearestTiesToEven);
-        return ConstantFP::get(Context, C3V);
+        return ConstantFP::get(C1->getContext(), C3V);
       case Instruction::FSub:
         (void)C3V.subtract(C2V, APFloat::rmNearestTiesToEven);
-        return ConstantFP::get(Context, C3V);
+        return ConstantFP::get(C1->getContext(), C3V);
       case Instruction::FMul:
         (void)C3V.multiply(C2V, APFloat::rmNearestTiesToEven);
-        return ConstantFP::get(Context, C3V);
+        return ConstantFP::get(C1->getContext(), C3V);
       case Instruction::FDiv:
         (void)C3V.divide(C2V, APFloat::rmNearestTiesToEven);
-        return ConstantFP::get(Context, C3V);
+        return ConstantFP::get(C1->getContext(), C3V);
       case Instruction::FRem:
         (void)C3V.mod(C2V, APFloat::rmNearestTiesToEven);
-        return ConstantFP::get(Context, C3V);
+        return ConstantFP::get(C1->getContext(), C3V);
       }
     }
   } else if (const VectorType *VTy = dyn_cast<VectorType>(C1->getType())) {
@@ -1127,10 +1348,19 @@ Constant *llvm::ConstantFoldBinaryInstruction(LLVMContext &Context,
     }
   }
 
-  if (isa<ConstantExpr>(C1)) {
+  if (ConstantExpr *CE1 = dyn_cast<ConstantExpr>(C1)) {
     // There are many possible foldings we could do here.  We should probably
     // at least fold add of a pointer with an integer into the appropriate
     // getelementptr.  This will improve alias analysis a bit.
+
+    // Given ((a + b) + c), if (b + c) folds to something interesting, return
+    // (a + (b + c)).
+    if (Instruction::isAssociative(Opcode, C1->getType()) &&
+        CE1->getOpcode() == Opcode) {
+      Constant *T = ConstantExpr::get(Opcode, CE1->getOperand(1), C2);
+      if (!isa<ConstantExpr>(T) || cast<ConstantExpr>(T)->getOpcode() != Opcode)
+        return ConstantExpr::get(Opcode, CE1->getOperand(0), T);
+    }
   } else if (isa<ConstantExpr>(C2)) {
     // If C2 is a constant expr and C1 isn't, flop them around and fold the
     // other way if possible.
@@ -1143,7 +1373,7 @@ Constant *llvm::ConstantFoldBinaryInstruction(LLVMContext &Context,
     case Instruction::Or:
     case Instruction::Xor:
       // No change of opcode required.
-      return ConstantFoldBinaryInstruction(Context, Opcode, C2, C1);
+      return ConstantFoldBinaryInstruction(Opcode, C2, C1);
 
     case Instruction::Shl:
     case Instruction::LShr:
@@ -1184,7 +1414,7 @@ Constant *llvm::ConstantFoldBinaryInstruction(LLVMContext &Context,
     case Instruction::SRem:
       // We can assume that C2 == 1.  If it were zero the result would be
       // undefined through division by zero.
-      return ConstantInt::getFalse(Context);
+      return ConstantInt::getFalse(C1->getContext());
     default:
       break;
     }
@@ -1218,8 +1448,7 @@ static bool isMaybeZeroSizedType(const Type *Ty) {
 /// first is less than the second, return -1, if the second is less than the
 /// first, return 1.  If the constants are not integral, return -2.
 ///
-static int IdxCompare(LLVMContext &Context, Constant *C1, Constant *C2, 
-                      const Type *ElTy) {
+static int IdxCompare(Constant *C1, Constant *C2,  const Type *ElTy) {
   if (C1 == C2) return 0;
 
   // Ok, we found a different index.  If they are not ConstantInt, we can't do
@@ -1230,10 +1459,10 @@ static int IdxCompare(LLVMContext &Context, Constant *C1, Constant *C2,
   // Ok, we have two differing integer indices.  Sign extend them to be the same
   // type.  Long is always big enough, so we use it.
   if (!C1->getType()->isInteger(64))
-    C1 = ConstantExpr::getSExt(C1, Type::getInt64Ty(Context));
+    C1 = ConstantExpr::getSExt(C1, Type::getInt64Ty(C1->getContext()));
 
   if (!C2->getType()->isInteger(64))
-    C2 = ConstantExpr::getSExt(C2, Type::getInt64Ty(Context));
+    C2 = ConstantExpr::getSExt(C2, Type::getInt64Ty(C1->getContext()));
 
   if (C1 == C2) return 0;  // They are equal
 
@@ -1262,8 +1491,7 @@ static int IdxCompare(LLVMContext &Context, Constant *C1, Constant *C2,
 /// To simplify this code we canonicalize the relation so that the first
 /// operand is always the most "complex" of the two.  We consider ConstantFP
 /// to be the simplest, and ConstantExprs to be the most complex.
-static FCmpInst::Predicate evaluateFCmpRelation(LLVMContext &Context,
-                                                Constant *V1, Constant *V2) {
+static FCmpInst::Predicate evaluateFCmpRelation(Constant *V1, Constant *V2) {
   assert(V1->getType() == V2->getType() &&
          "Cannot compare values of different types!");
 
@@ -1296,7 +1524,7 @@ static FCmpInst::Predicate evaluateFCmpRelation(LLVMContext &Context,
     }
 
     // If the first operand is simple and second is ConstantExpr, swap operands.
-    FCmpInst::Predicate SwappedRelation = evaluateFCmpRelation(Context, V2, V1);
+    FCmpInst::Predicate SwappedRelation = evaluateFCmpRelation(V2, V1);
     if (SwappedRelation != FCmpInst::BAD_FCMP_PREDICATE)
       return FCmpInst::getSwappedPredicate(SwappedRelation);
   } else {
@@ -1331,16 +1559,16 @@ static FCmpInst::Predicate evaluateFCmpRelation(LLVMContext &Context,
 /// constants (like ConstantInt) to be the simplest, followed by
 /// GlobalValues, followed by ConstantExpr's (the most complex).
 ///
-static ICmpInst::Predicate evaluateICmpRelation(LLVMContext &Context,
-                                                Constant *V1, 
-                                                Constant *V2,
+static ICmpInst::Predicate evaluateICmpRelation(Constant *V1, Constant *V2,
                                                 bool isSigned) {
   assert(V1->getType() == V2->getType() &&
          "Cannot compare different types of values!");
   if (V1 == V2) return ICmpInst::ICMP_EQ;
 
-  if (!isa<ConstantExpr>(V1) && !isa<GlobalValue>(V1)) {
-    if (!isa<GlobalValue>(V2) && !isa<ConstantExpr>(V2)) {
+  if (!isa<ConstantExpr>(V1) && !isa<GlobalValue>(V1) &&
+      !isa<BlockAddress>(V1)) {
+    if (!isa<GlobalValue>(V2) && !isa<ConstantExpr>(V2) &&
+        !isa<BlockAddress>(V2)) {
       // We distilled this down to a simple case, use the standard constant
       // folder.
       ConstantInt *R = 0;
@@ -1363,36 +1591,63 @@ static ICmpInst::Predicate evaluateICmpRelation(LLVMContext &Context,
 
     // If the first operand is simple, swap operands.
     ICmpInst::Predicate SwappedRelation = 
-      evaluateICmpRelation(Context, V2, V1, isSigned);
+      evaluateICmpRelation(V2, V1, isSigned);
     if (SwappedRelation != ICmpInst::BAD_ICMP_PREDICATE)
       return ICmpInst::getSwappedPredicate(SwappedRelation);
 
-  } else if (const GlobalValue *CPR1 = dyn_cast<GlobalValue>(V1)) {
+  } else if (const GlobalValue *GV = dyn_cast<GlobalValue>(V1)) {
     if (isa<ConstantExpr>(V2)) {  // Swap as necessary.
       ICmpInst::Predicate SwappedRelation = 
-        evaluateICmpRelation(Context, V2, V1, isSigned);
+        evaluateICmpRelation(V2, V1, isSigned);
       if (SwappedRelation != ICmpInst::BAD_ICMP_PREDICATE)
         return ICmpInst::getSwappedPredicate(SwappedRelation);
-      else
-        return ICmpInst::BAD_ICMP_PREDICATE;
+      return ICmpInst::BAD_ICMP_PREDICATE;
     }
 
-    // Now we know that the RHS is a GlobalValue or simple constant,
-    // which (since the types must match) means that it's a ConstantPointerNull.
-    if (const GlobalValue *CPR2 = dyn_cast<GlobalValue>(V2)) {
+    // Now we know that the RHS is a GlobalValue, BlockAddress or simple
+    // constant (which, since the types must match, means that it's a
+    // ConstantPointerNull).
+    if (const GlobalValue *GV2 = dyn_cast<GlobalValue>(V2)) {
       // Don't try to decide equality of aliases.
-      if (!isa<GlobalAlias>(CPR1) && !isa<GlobalAlias>(CPR2))
-        if (!CPR1->hasExternalWeakLinkage() || !CPR2->hasExternalWeakLinkage())
+      if (!isa<GlobalAlias>(GV) && !isa<GlobalAlias>(GV2))
+        if (!GV->hasExternalWeakLinkage() || !GV2->hasExternalWeakLinkage())
           return ICmpInst::ICMP_NE;
+    } else if (isa<BlockAddress>(V2)) {
+      return ICmpInst::ICMP_NE; // Globals never equal labels.
     } else {
       assert(isa<ConstantPointerNull>(V2) && "Canonicalization guarantee!");
-      // GlobalVals can never be null.  Don't try to evaluate aliases.
-      if (!CPR1->hasExternalWeakLinkage() && !isa<GlobalAlias>(CPR1))
+      // GlobalVals can never be null unless they have external weak linkage.
+      // We don't try to evaluate aliases here.
+      if (!GV->hasExternalWeakLinkage() && !isa<GlobalAlias>(GV))
         return ICmpInst::ICMP_NE;
     }
+  } else if (const BlockAddress *BA = dyn_cast<BlockAddress>(V1)) {
+    if (isa<ConstantExpr>(V2)) {  // Swap as necessary.
+      ICmpInst::Predicate SwappedRelation = 
+        evaluateICmpRelation(V2, V1, isSigned);
+      if (SwappedRelation != ICmpInst::BAD_ICMP_PREDICATE)
+        return ICmpInst::getSwappedPredicate(SwappedRelation);
+      return ICmpInst::BAD_ICMP_PREDICATE;
+    }
+    
+    // Now we know that the RHS is a GlobalValue, BlockAddress or simple
+    // constant (which, since the types must match, means that it is a
+    // ConstantPointerNull).
+    if (const BlockAddress *BA2 = dyn_cast<BlockAddress>(V2)) {
+      // Block address in another function can't equal this one, but block
+      // addresses in the current function might be the same if blocks are
+      // empty.
+      if (BA2->getFunction() != BA->getFunction())
+        return ICmpInst::ICMP_NE;
+    } else {
+      // Block addresses aren't null, don't equal the address of globals.
+      assert((isa<ConstantPointerNull>(V2) || isa<GlobalValue>(V2)) &&
+             "Canonicalization guarantee!");
+      return ICmpInst::ICMP_NE;
+    }
   } else {
     // Ok, the LHS is known to be a constantexpr.  The RHS can be any of a
-    // constantexpr, a CPR, or a simple constant.
+    // constantexpr, a global, block address, or a simple constant.
     ConstantExpr *CE1 = cast<ConstantExpr>(V1);
     Constant *CE1Op0 = CE1->getOperand(0);
 
@@ -1415,7 +1670,7 @@ static ICmpInst::Predicate evaluateICmpRelation(LLVMContext &Context,
           (isa<PointerType>(CE1->getType()) || CE1->getType()->isInteger())) {
         if (CE1->getOpcode() == Instruction::ZExt) isSigned = false;
         if (CE1->getOpcode() == Instruction::SExt) isSigned = true;
-        return evaluateICmpRelation(Context, CE1Op0,
+        return evaluateICmpRelation(CE1Op0,
                                     Constant::getNullValue(CE1Op0->getType()), 
                                     isSigned);
       }
@@ -1447,9 +1702,9 @@ static ICmpInst::Predicate evaluateICmpRelation(LLVMContext &Context,
           return ICmpInst::ICMP_EQ;
         }
         // Otherwise, we can't really say if the first operand is null or not.
-      } else if (const GlobalValue *CPR2 = dyn_cast<GlobalValue>(V2)) {
+      } else if (const GlobalValue *GV2 = dyn_cast<GlobalValue>(V2)) {
         if (isa<ConstantPointerNull>(CE1Op0)) {
-          if (CPR2->hasExternalWeakLinkage())
+          if (GV2->hasExternalWeakLinkage())
             // Weak linkage GVals could be zero or not. We're comparing it to
            // a null pointer, so it's less-or-equal
             return isSigned ? ICmpInst::ICMP_SLE : ICmpInst::ICMP_ULE;
@@ -1457,8 +1712,8 @@ static ICmpInst::Predicate evaluateICmpRelation(LLVMContext &Context,
            // If it's not weak linkage, the GVal must have a non-zero address
             // so the result is less-than
             return isSigned ? ICmpInst::ICMP_SLT : ICmpInst::ICMP_ULT;
-        } else if (const GlobalValue *CPR1 = dyn_cast<GlobalValue>(CE1Op0)) {
-          if (CPR1 == CPR2) {
+        } else if (const GlobalValue *GV = dyn_cast<GlobalValue>(CE1Op0)) {
+          if (GV == GV2) {
             // If this is a getelementptr of the same global, then it must be
             // different.  Because the types must match, the getelementptr could
             // only have at most one index, and because we fold getelementptr's
@@ -1504,7 +1759,7 @@ static ICmpInst::Predicate evaluateICmpRelation(LLVMContext &Context,
             gep_type_iterator GTI = gep_type_begin(CE1);
             for (;i != CE1->getNumOperands() && i != CE2->getNumOperands();
                  ++i, ++GTI)
-              switch (IdxCompare(Context, CE1->getOperand(i),
+              switch (IdxCompare(CE1->getOperand(i),
                                  CE2->getOperand(i), GTI.getIndexedType())) {
               case -1: return isSigned ? ICmpInst::ICMP_SLT:ICmpInst::ICMP_ULT;
               case 1:  return isSigned ? ICmpInst::ICMP_SGT:ICmpInst::ICMP_UGT;
@@ -1540,14 +1795,14 @@ static ICmpInst::Predicate evaluateICmpRelation(LLVMContext &Context,
   return ICmpInst::BAD_ICMP_PREDICATE;
 }
 
-Constant *llvm::ConstantFoldCompareInstruction(LLVMContext &Context,
-                                               unsigned short pred, 
+Constant *llvm::ConstantFoldCompareInstruction(unsigned short pred, 
                                                Constant *C1, Constant *C2) {
   const Type *ResultTy;
   if (const VectorType *VT = dyn_cast<VectorType>(C1->getType()))
-    ResultTy = VectorType::get(Type::getInt1Ty(Context), VT->getNumElements());
+    ResultTy = VectorType::get(Type::getInt1Ty(C1->getContext()),
+                               VT->getNumElements());
   else
-    ResultTy = Type::getInt1Ty(Context);
+    ResultTy = Type::getInt1Ty(C1->getContext());
 
   // Fold FCMP_FALSE/FCMP_TRUE unconditionally.
   if (pred == FCmpInst::FCMP_FALSE)
@@ -1570,9 +1825,9 @@ Constant *llvm::ConstantFoldCompareInstruction(LLVMContext &Context,
       // Don't try to evaluate aliases.  External weak GV can be null.
       if (!isa<GlobalAlias>(GV) && !GV->hasExternalWeakLinkage()) {
         if (pred == ICmpInst::ICMP_EQ)
-          return ConstantInt::getFalse(Context);
+          return ConstantInt::getFalse(C1->getContext());
         else if (pred == ICmpInst::ICMP_NE)
-          return ConstantInt::getTrue(Context);
+          return ConstantInt::getTrue(C1->getContext());
       }
   // icmp eq/ne(GV,null) -> false/true
   } else if (C2->isNullValue()) {
@@ -1580,9 +1835,9 @@ Constant *llvm::ConstantFoldCompareInstruction(LLVMContext &Context,
       // Don't try to evaluate aliases.  External weak GV can be null.
       if (!isa<GlobalAlias>(GV) && !GV->hasExternalWeakLinkage()) {
         if (pred == ICmpInst::ICMP_EQ)
-          return ConstantInt::getFalse(Context);
+          return ConstantInt::getFalse(C1->getContext());
         else if (pred == ICmpInst::ICMP_NE)
-          return ConstantInt::getTrue(Context);
+          return ConstantInt::getTrue(C1->getContext());
       }
   }
 
@@ -1605,26 +1860,16 @@ Constant *llvm::ConstantFoldCompareInstruction(LLVMContext &Context,
     APInt V2 = cast<ConstantInt>(C2)->getValue();
     switch (pred) {
     default: llvm_unreachable("Invalid ICmp Predicate"); return 0;
-    case ICmpInst::ICMP_EQ:
-      return ConstantInt::get(Type::getInt1Ty(Context), V1 == V2);
-    case ICmpInst::ICMP_NE: 
-      return ConstantInt::get(Type::getInt1Ty(Context), V1 != V2);
-    case ICmpInst::ICMP_SLT:
-      return ConstantInt::get(Type::getInt1Ty(Context), V1.slt(V2));
-    case ICmpInst::ICMP_SGT:
-      return ConstantInt::get(Type::getInt1Ty(Context), V1.sgt(V2));
-    case ICmpInst::ICMP_SLE:
-      return ConstantInt::get(Type::getInt1Ty(Context), V1.sle(V2));
-    case ICmpInst::ICMP_SGE:
-      return ConstantInt::get(Type::getInt1Ty(Context), V1.sge(V2));
-    case ICmpInst::ICMP_ULT:
-      return ConstantInt::get(Type::getInt1Ty(Context), V1.ult(V2));
-    case ICmpInst::ICMP_UGT:
-      return ConstantInt::get(Type::getInt1Ty(Context), V1.ugt(V2));
-    case ICmpInst::ICMP_ULE:
-      return ConstantInt::get(Type::getInt1Ty(Context), V1.ule(V2));
-    case ICmpInst::ICMP_UGE:
-      return ConstantInt::get(Type::getInt1Ty(Context), V1.uge(V2));
+    case ICmpInst::ICMP_EQ:  return ConstantInt::get(ResultTy, V1 == V2);
+    case ICmpInst::ICMP_NE:  return ConstantInt::get(ResultTy, V1 != V2);
+    case ICmpInst::ICMP_SLT: return ConstantInt::get(ResultTy, V1.slt(V2));
+    case ICmpInst::ICMP_SGT: return ConstantInt::get(ResultTy, V1.sgt(V2));
+    case ICmpInst::ICMP_SLE: return ConstantInt::get(ResultTy, V1.sle(V2));
+    case ICmpInst::ICMP_SGE: return ConstantInt::get(ResultTy, V1.sge(V2));
+    case ICmpInst::ICMP_ULT: return ConstantInt::get(ResultTy, V1.ult(V2));
+    case ICmpInst::ICMP_UGT: return ConstantInt::get(ResultTy, V1.ugt(V2));
+    case ICmpInst::ICMP_ULE: return ConstantInt::get(ResultTy, V1.ule(V2));
+    case ICmpInst::ICMP_UGE: return ConstantInt::get(ResultTy, V1.uge(V2));
     }
   } else if (isa<ConstantFP>(C1) && isa<ConstantFP>(C2)) {
     APFloat C1V = cast<ConstantFP>(C1)->getValueAPF();
@@ -1632,47 +1877,47 @@ Constant *llvm::ConstantFoldCompareInstruction(LLVMContext &Context,
     APFloat::cmpResult R = C1V.compare(C2V);
     switch (pred) {
     default: llvm_unreachable("Invalid FCmp Predicate"); return 0;
-    case FCmpInst::FCMP_FALSE: return ConstantInt::getFalse(Context);
-    case FCmpInst::FCMP_TRUE:  return ConstantInt::getTrue(Context);
+    case FCmpInst::FCMP_FALSE: return Constant::getNullValue(ResultTy);
+    case FCmpInst::FCMP_TRUE:  return Constant::getAllOnesValue(ResultTy);
     case FCmpInst::FCMP_UNO:
-      return ConstantInt::get(Type::getInt1Ty(Context), R==APFloat::cmpUnordered);
+      return ConstantInt::get(ResultTy, R==APFloat::cmpUnordered);
     case FCmpInst::FCMP_ORD:
-      return ConstantInt::get(Type::getInt1Ty(Context), R!=APFloat::cmpUnordered);
+      return ConstantInt::get(ResultTy, R!=APFloat::cmpUnordered);
     case FCmpInst::FCMP_UEQ:
-      return ConstantInt::get(Type::getInt1Ty(Context), R==APFloat::cmpUnordered ||
-                                            R==APFloat::cmpEqual);
+      return ConstantInt::get(ResultTy, R==APFloat::cmpUnordered ||
+                                        R==APFloat::cmpEqual);
     case FCmpInst::FCMP_OEQ:   
-      return ConstantInt::get(Type::getInt1Ty(Context), R==APFloat::cmpEqual);
+      return ConstantInt::get(ResultTy, R==APFloat::cmpEqual);
     case FCmpInst::FCMP_UNE:
-      return ConstantInt::get(Type::getInt1Ty(Context), R!=APFloat::cmpEqual);
+      return ConstantInt::get(ResultTy, R!=APFloat::cmpEqual);
     case FCmpInst::FCMP_ONE:   
-      return ConstantInt::get(Type::getInt1Ty(Context), R==APFloat::cmpLessThan ||
-                                            R==APFloat::cmpGreaterThan);
+      return ConstantInt::get(ResultTy, R==APFloat::cmpLessThan ||
+                                        R==APFloat::cmpGreaterThan);
     case FCmpInst::FCMP_ULT: 
-      return ConstantInt::get(Type::getInt1Ty(Context), R==APFloat::cmpUnordered ||
-                                            R==APFloat::cmpLessThan);
+      return ConstantInt::get(ResultTy, R==APFloat::cmpUnordered ||
+                                        R==APFloat::cmpLessThan);
     case FCmpInst::FCMP_OLT:   
-      return ConstantInt::get(Type::getInt1Ty(Context), R==APFloat::cmpLessThan);
+      return ConstantInt::get(ResultTy, R==APFloat::cmpLessThan);
     case FCmpInst::FCMP_UGT:
-      return ConstantInt::get(Type::getInt1Ty(Context), R==APFloat::cmpUnordered ||
-                                            R==APFloat::cmpGreaterThan);
+      return ConstantInt::get(ResultTy, R==APFloat::cmpUnordered ||
+                                        R==APFloat::cmpGreaterThan);
     case FCmpInst::FCMP_OGT:
-      return ConstantInt::get(Type::getInt1Ty(Context), R==APFloat::cmpGreaterThan);
+      return ConstantInt::get(ResultTy, R==APFloat::cmpGreaterThan);
     case FCmpInst::FCMP_ULE:
-      return ConstantInt::get(Type::getInt1Ty(Context), R!=APFloat::cmpGreaterThan);
+      return ConstantInt::get(ResultTy, R!=APFloat::cmpGreaterThan);
     case FCmpInst::FCMP_OLE: 
-      return ConstantInt::get(Type::getInt1Ty(Context), R==APFloat::cmpLessThan ||
-                                            R==APFloat::cmpEqual);
+      return ConstantInt::get(ResultTy, R==APFloat::cmpLessThan ||
+                                        R==APFloat::cmpEqual);
     case FCmpInst::FCMP_UGE:
-      return ConstantInt::get(Type::getInt1Ty(Context), R!=APFloat::cmpLessThan);
+      return ConstantInt::get(ResultTy, R!=APFloat::cmpLessThan);
     case FCmpInst::FCMP_OGE: 
-      return ConstantInt::get(Type::getInt1Ty(Context), R==APFloat::cmpGreaterThan ||
-                                            R==APFloat::cmpEqual);
+      return ConstantInt::get(ResultTy, R==APFloat::cmpGreaterThan ||
+                                        R==APFloat::cmpEqual);
     }
   } else if (isa<VectorType>(C1->getType())) {
     SmallVector<Constant*, 16> C1Elts, C2Elts;
-    C1->getVectorElements(Context, C1Elts);
-    C2->getVectorElements(Context, C2Elts);
+    C1->getVectorElements(C1Elts);
+    C2->getVectorElements(C2Elts);
     if (C1Elts.empty() || C2Elts.empty())
       return 0;
 
@@ -1688,7 +1933,7 @@ Constant *llvm::ConstantFoldCompareInstruction(LLVMContext &Context,
 
   if (C1->getType()->isFloatingPoint()) {
     int Result = -1;  // -1 = unknown, 0 = known false, 1 = known true.
-    switch (evaluateFCmpRelation(Context, C1, C2)) {
+    switch (evaluateFCmpRelation(C1, C2)) {
     default: llvm_unreachable("Unknown relation!");
     case FCmpInst::FCMP_UNO:
     case FCmpInst::FCMP_ORD:
@@ -1742,12 +1987,12 @@ Constant *llvm::ConstantFoldCompareInstruction(LLVMContext &Context,
 
     // If we evaluated the result, return it now.
     if (Result != -1)
-      return ConstantInt::get(Type::getInt1Ty(Context), Result);
+      return ConstantInt::get(ResultTy, Result);
 
   } else {
     // Evaluate the relation between the two constants, per the predicate.
     int Result = -1;  // -1 = unknown, 0 = known false, 1 = known true.
-    switch (evaluateICmpRelation(Context, C1, C2, CmpInst::isSigned(pred))) {
+    switch (evaluateICmpRelation(C1, C2, CmpInst::isSigned(pred))) {
     default: llvm_unreachable("Unknown relational!");
     case ICmpInst::BAD_ICMP_PREDICATE:
       break;  // Couldn't determine anything about these constants.
@@ -1812,13 +2057,15 @@ Constant *llvm::ConstantFoldCompareInstruction(LLVMContext &Context,
 
     // If we evaluated the result, return it now.
     if (Result != -1)
-      return ConstantInt::get(Type::getInt1Ty(Context), Result);
+      return ConstantInt::get(ResultTy, Result);
 
     // If the right hand side is a bitcast, try using its inverse to simplify
-    // it by moving it to the left hand side.
+    // it by moving it to the left hand side.  We can't do this if it would turn
+    // a vector compare into a scalar compare or vice versa.
     if (ConstantExpr *CE2 = dyn_cast<ConstantExpr>(C2)) {
-      if (CE2->getOpcode() == Instruction::BitCast) {
-        Constant *CE2Op0 = CE2->getOperand(0);
+      Constant *CE2Op0 = CE2->getOperand(0);
+      if (CE2->getOpcode() == Instruction::BitCast &&
+          isa<VectorType>(CE2->getType())==isa<VectorType>(CE2Op0->getType())) {
         Constant *Inverse = ConstantExpr::getBitCast(C1, CE2Op0->getType());
         return ConstantExpr::getICmp(pred, Inverse, CE2Op0);
       }
@@ -1890,8 +2137,7 @@ static bool isInBoundsIndices(Constant *const *Idxs, size_t NumIdx) {
   return true;
 }
 
-Constant *llvm::ConstantFoldGetElementPtr(LLVMContext &Context, 
-                                          Constant *C,
+Constant *llvm::ConstantFoldGetElementPtr(Constant *C,
                                           bool inBounds,
                                           Constant* const *Idxs,
                                           unsigned NumIdx) {
@@ -1951,10 +2197,9 @@ Constant *llvm::ConstantFoldGetElementPtr(LLVMContext &Context,
         if (!Idx0->isNullValue()) {
           const Type *IdxTy = Combined->getType();
           if (IdxTy != Idx0->getType()) {
-            Constant *C1 =
-              ConstantExpr::getSExtOrBitCast(Idx0, Type::getInt64Ty(Context));
-            Constant *C2 = ConstantExpr::getSExtOrBitCast(Combined, 
-                                                          Type::getInt64Ty(Context));
+            const Type *Int64Ty = Type::getInt64Ty(IdxTy->getContext());
+            Constant *C1 = ConstantExpr::getSExtOrBitCast(Idx0, Int64Ty);
+            Constant *C2 = ConstantExpr::getSExtOrBitCast(Combined, Int64Ty);
             Combined = ConstantExpr::get(Instruction::Add, C1, C2);
           } else {
             Combined =
@@ -1975,7 +2220,7 @@ Constant *llvm::ConstantFoldGetElementPtr(LLVMContext &Context,
     }
 
     // Implement folding of:
-    //    int* getelementptr ([2 x int]* cast ([3 x int]* %X to [2 x int]*),
+    //    int* getelementptr ([2 x int]* bitcast ([3 x int]* %X to [2 x int]*),
     //                        long 0, long 0)
     // To: int* getelementptr ([3 x int]* %X, long 0, long 0)
     //
@@ -1992,28 +2237,6 @@ Constant *llvm::ConstantFoldGetElementPtr(LLVMContext &Context,
                 ConstantExpr::getGetElementPtr(
                       (Constant*)CE->getOperand(0), Idxs, NumIdx);
     }
-
-    // Fold: getelementptr (i8* inttoptr (i64 1 to i8*), i32 -1)
-    // Into: inttoptr (i64 0 to i8*)
-    // This happens with pointers to member functions in C++.
-    if (CE->getOpcode() == Instruction::IntToPtr && NumIdx == 1 &&
-        isa<ConstantInt>(CE->getOperand(0)) && isa<ConstantInt>(Idxs[0]) &&
-        cast<PointerType>(CE->getType())->getElementType() ==
-            Type::getInt8Ty(Context)) {
-      Constant *Base = CE->getOperand(0);
-      Constant *Offset = Idxs[0];
-
-      // Convert the smaller integer to the larger type.
-      if (Offset->getType()->getPrimitiveSizeInBits() < 
-          Base->getType()->getPrimitiveSizeInBits())
-        Offset = ConstantExpr::getSExt(Offset, Base->getType());
-      else if (Base->getType()->getPrimitiveSizeInBits() <
-               Offset->getType()->getPrimitiveSizeInBits())
-        Base = ConstantExpr::getZExt(Base, Offset->getType());
-
-      Base = ConstantExpr::getAdd(Base, Offset);
-      return ConstantExpr::getIntToPtr(Base, CE->getType());
-    }
   }
 
   // Check to see if any array indices are not within the corresponding
@@ -2045,10 +2268,10 @@ Constant *llvm::ConstantFoldGetElementPtr(LLVMContext &Context,
             // overflow trouble.
             if (!PrevIdx->getType()->isInteger(64))
               PrevIdx = ConstantExpr::getSExt(PrevIdx,
-                                              Type::getInt64Ty(Context));
+                                           Type::getInt64Ty(Div->getContext()));
             if (!Div->getType()->isInteger(64))
               Div = ConstantExpr::getSExt(Div,
-                                          Type::getInt64Ty(Context));
+                                          Type::getInt64Ty(Div->getContext()));
 
             NewIdxs[i-1] = ConstantExpr::getAdd(PrevIdx, Div);
           } else {
diff --git a/libclamav/c++/llvm/lib/VMCore/ConstantFold.h b/libclamav/c++/llvm/lib/VMCore/ConstantFold.h
index cc97001..d2dbbdd 100644
--- a/libclamav/c++/llvm/lib/VMCore/ConstantFold.h
+++ b/libclamav/c++/llvm/lib/VMCore/ConstantFold.h
@@ -23,46 +23,31 @@ namespace llvm {
   class Value;
   class Constant;
   class Type;
-  class LLVMContext;
 
   // Constant fold various types of instruction...
   Constant *ConstantFoldCastInstruction(
-    LLVMContext &Context,
     unsigned opcode,     ///< The opcode of the cast
     Constant *V,         ///< The source constant
     const Type *DestTy   ///< The destination type
   );
-  Constant *ConstantFoldSelectInstruction(LLVMContext &Context,
-                                          Constant *Cond,
+  Constant *ConstantFoldSelectInstruction(Constant *Cond,
                                           Constant *V1, Constant *V2);
-  Constant *ConstantFoldExtractElementInstruction(LLVMContext &Context,
-                                                  Constant *Val,
-                                                  Constant *Idx);
-  Constant *ConstantFoldInsertElementInstruction(LLVMContext &Context,
-                                                 Constant *Val,
-                                                 Constant *Elt,
+  Constant *ConstantFoldExtractElementInstruction(Constant *Val, Constant *Idx);
+  Constant *ConstantFoldInsertElementInstruction(Constant *Val, Constant *Elt,
                                                  Constant *Idx);
-  Constant *ConstantFoldShuffleVectorInstruction(LLVMContext &Context,
-                                                 Constant *V1,
-                                                 Constant *V2,
+  Constant *ConstantFoldShuffleVectorInstruction(Constant *V1, Constant *V2,
                                                  Constant *Mask);
-  Constant *ConstantFoldExtractValueInstruction(LLVMContext &Context,
-                                                Constant *Agg,
+  Constant *ConstantFoldExtractValueInstruction(Constant *Agg,
                                                 const unsigned *Idxs,
                                                 unsigned NumIdx);
-  Constant *ConstantFoldInsertValueInstruction(LLVMContext &Context,
-                                               Constant *Agg,
-                                               Constant *Val,
+  Constant *ConstantFoldInsertValueInstruction(Constant *Agg, Constant *Val,
                                                const unsigned *Idxs,
                                                unsigned NumIdx);
-  Constant *ConstantFoldBinaryInstruction(LLVMContext &Context,
-                                          unsigned Opcode, Constant *V1,
+  Constant *ConstantFoldBinaryInstruction(unsigned Opcode, Constant *V1,
                                           Constant *V2);
-  Constant *ConstantFoldCompareInstruction(LLVMContext &Context,
-                                           unsigned short predicate, 
+  Constant *ConstantFoldCompareInstruction(unsigned short predicate, 
                                            Constant *C1, Constant *C2);
-  Constant *ConstantFoldGetElementPtr(LLVMContext &Context, Constant *C,
-                                      bool inBounds,
+  Constant *ConstantFoldGetElementPtr(Constant *C, bool inBounds,
                                       Constant* const *Idxs, unsigned NumIdx);
 } // End llvm namespace
 
diff --git a/libclamav/c++/llvm/lib/VMCore/Constants.cpp b/libclamav/c++/llvm/lib/VMCore/Constants.cpp
index 916aac6..8cc6e94 100644
--- a/libclamav/c++/llvm/lib/VMCore/Constants.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Constants.cpp
@@ -228,8 +228,7 @@ Constant::PossibleRelocationsTy Constant::getRelocationInfo() const {
 /// type, returns the elements of the vector in the specified smallvector.
 /// This handles breaking down a vector undef into undef elements, etc.  For
 /// constant exprs and other cases we can't handle, we return an empty vector.
-void Constant::getVectorElements(LLVMContext &Context,
-                                 SmallVectorImpl<Constant*> &Elts) const {
+void Constant::getVectorElements(SmallVectorImpl<Constant*> &Elts) const {
   assert(isa<VectorType>(getType()) && "Not a vector constant!");
   
   if (const ConstantVector *CV = dyn_cast<ConstantVector>(this)) {
@@ -586,6 +585,27 @@ Constant* ConstantStruct::get(LLVMContext &Context,
   return get(Context, std::vector<Constant*>(Vals, Vals+NumVals), Packed);
 }
 
+ConstantUnion::ConstantUnion(const UnionType *T, Constant* V)
+  : Constant(T, ConstantUnionVal,
+             OperandTraits<ConstantUnion>::op_end(this) - 1, 1) {
+  Use *OL = OperandList;
+  assert(T->getElementTypeIndex(V->getType()) >= 0 &&
+      "Initializer for union element isn't a member of union type!");
+  *OL = V;
+}
+
+// ConstantUnion accessors.
+Constant* ConstantUnion::get(const UnionType* T, Constant* V) {
+  LLVMContextImpl* pImpl = T->getContext().pImpl;
+  
+  // Create a ConstantAggregateZero value if all elements are zeros...
+  if (!V->isNullValue())
+    return pImpl->UnionConstants.getOrCreate(T, V);
+
+  return ConstantAggregateZero::get(T);
+}
+
+
 ConstantVector::ConstantVector(const VectorType *T,
                                const std::vector<Constant*> &V)
   : Constant(T, ConstantVectorVal,
@@ -646,21 +666,42 @@ Constant* ConstantExpr::getNSWNeg(Constant* C) {
   return getNSWSub(ConstantFP::getZeroValueForNegation(C->getType()), C);
 }
 
+Constant* ConstantExpr::getNUWNeg(Constant* C) {
+  assert(C->getType()->isIntOrIntVector() &&
+         "Cannot NEG a nonintegral value!");
+  return getNUWSub(ConstantFP::getZeroValueForNegation(C->getType()), C);
+}
+
 Constant* ConstantExpr::getNSWAdd(Constant* C1, Constant* C2) {
   return getTy(C1->getType(), Instruction::Add, C1, C2,
                OverflowingBinaryOperator::NoSignedWrap);
 }
 
+Constant* ConstantExpr::getNUWAdd(Constant* C1, Constant* C2) {
+  return getTy(C1->getType(), Instruction::Add, C1, C2,
+               OverflowingBinaryOperator::NoUnsignedWrap);
+}
+
 Constant* ConstantExpr::getNSWSub(Constant* C1, Constant* C2) {
   return getTy(C1->getType(), Instruction::Sub, C1, C2,
                OverflowingBinaryOperator::NoSignedWrap);
 }
 
+Constant* ConstantExpr::getNUWSub(Constant* C1, Constant* C2) {
+  return getTy(C1->getType(), Instruction::Sub, C1, C2,
+               OverflowingBinaryOperator::NoUnsignedWrap);
+}
+
 Constant* ConstantExpr::getNSWMul(Constant* C1, Constant* C2) {
   return getTy(C1->getType(), Instruction::Mul, C1, C2,
                OverflowingBinaryOperator::NoSignedWrap);
 }
 
+Constant* ConstantExpr::getNUWMul(Constant* C1, Constant* C2) {
+  return getTy(C1->getType(), Instruction::Mul, C1, C2,
+               OverflowingBinaryOperator::NoUnsignedWrap);
+}
+
 Constant* ConstantExpr::getExactSDiv(Constant* C1, Constant* C2) {
   return getTy(C1->getType(), Instruction::SDiv, C1, C2,
                SDivOperator::IsExact);
@@ -990,6 +1031,13 @@ void ConstantStruct::destroyConstant() {
 
 // destroyConstant - Remove the constant from the constant table...
 //
+void ConstantUnion::destroyConstant() {
+  getType()->getContext().pImpl->UnionConstants.remove(this);
+  destroyConstantImpl();
+}
+
+// destroyConstant - Remove the constant from the constant table...
+//
 void ConstantVector::destroyConstant() {
   getType()->getContext().pImpl->VectorConstants.remove(this);
   destroyConstantImpl();
@@ -1134,7 +1182,7 @@ static inline Constant *getFoldedCast(
   Instruction::CastOps opc, Constant *C, const Type *Ty) {
   assert(Ty->isFirstClassType() && "Cannot cast to an aggregate type!");
   // Fold a few common cases
-  if (Constant *FC = ConstantFoldCastInstruction(Ty->getContext(), opc, C, Ty))
+  if (Constant *FC = ConstantFoldCastInstruction(opc, C, Ty))
     return FC;
 
   LLVMContextImpl *pImpl = Ty->getContext().pImpl;
@@ -1150,24 +1198,24 @@ Constant *ConstantExpr::getCast(unsigned oc, Constant *C, const Type *Ty) {
   Instruction::CastOps opc = Instruction::CastOps(oc);
   assert(Instruction::isCast(opc) && "opcode out of range");
   assert(C && Ty && "Null arguments to getCast");
-  assert(Ty->isFirstClassType() && "Cannot cast to an aggregate type!");
+  assert(CastInst::castIsValid(opc, C, Ty) && "Invalid constantexpr cast!");
 
   switch (opc) {
-    default:
-      llvm_unreachable("Invalid cast opcode");
-      break;
-    case Instruction::Trunc:    return getTrunc(C, Ty);
-    case Instruction::ZExt:     return getZExt(C, Ty);
-    case Instruction::SExt:     return getSExt(C, Ty);
-    case Instruction::FPTrunc:  return getFPTrunc(C, Ty);
-    case Instruction::FPExt:    return getFPExtend(C, Ty);
-    case Instruction::UIToFP:   return getUIToFP(C, Ty);
-    case Instruction::SIToFP:   return getSIToFP(C, Ty);
-    case Instruction::FPToUI:   return getFPToUI(C, Ty);
-    case Instruction::FPToSI:   return getFPToSI(C, Ty);
-    case Instruction::PtrToInt: return getPtrToInt(C, Ty);
-    case Instruction::IntToPtr: return getIntToPtr(C, Ty);
-    case Instruction::BitCast:  return getBitCast(C, Ty);
+  default:
+    llvm_unreachable("Invalid cast opcode");
+    break;
+  case Instruction::Trunc:    return getTrunc(C, Ty);
+  case Instruction::ZExt:     return getZExt(C, Ty);
+  case Instruction::SExt:     return getSExt(C, Ty);
+  case Instruction::FPTrunc:  return getFPTrunc(C, Ty);
+  case Instruction::FPExt:    return getFPExtend(C, Ty);
+  case Instruction::UIToFP:   return getUIToFP(C, Ty);
+  case Instruction::SIToFP:   return getSIToFP(C, Ty);
+  case Instruction::FPToUI:   return getFPToUI(C, Ty);
+  case Instruction::FPToSI:   return getFPToSI(C, Ty);
+  case Instruction::PtrToInt: return getPtrToInt(C, Ty);
+  case Instruction::IntToPtr: return getIntToPtr(C, Ty);
+  case Instruction::BitCast:  return getBitCast(C, Ty);
   }
   return 0;
 } 
@@ -1347,20 +1395,8 @@ Constant *ConstantExpr::getIntToPtr(Constant *C, const Type *DstTy) {
 }
 
 Constant *ConstantExpr::getBitCast(Constant *C, const Type *DstTy) {
-  // BitCast implies a no-op cast of type only. No bits change.  However, you 
-  // can't cast pointers to anything but pointers.
-#ifndef NDEBUG
-  const Type *SrcTy = C->getType();
-  assert((isa<PointerType>(SrcTy) == isa<PointerType>(DstTy)) &&
-         "BitCast cannot cast pointer to non-pointer and vice versa");
-
-  // Now we know we're not dealing with mismatched pointer casts (ptr->nonptr
-  // or nonptr->ptr). For all the other types, the cast is okay if source and 
-  // destination bit widths are identical.
-  unsigned SrcBitSize = SrcTy->getPrimitiveSizeInBits();
-  unsigned DstBitSize = DstTy->getPrimitiveSizeInBits();
-#endif
-  assert(SrcBitSize == DstBitSize && "BitCast requires types of same width");
+  assert(CastInst::castIsValid(Instruction::BitCast, C, DstTy) &&
+         "Invalid constantexpr bitcast!");
   
   // It is common to ask for a bitcast of a value to its own type, handle this
   // speedily.
@@ -1380,8 +1416,7 @@ Constant *ConstantExpr::getTy(const Type *ReqTy, unsigned Opcode,
          "Operand types in binary constant expression should match");
 
   if (ReqTy == C1->getType() || ReqTy == Type::getInt1Ty(ReqTy->getContext()))
-    if (Constant *FC = ConstantFoldBinaryInstruction(ReqTy->getContext(),
-                                                     Opcode, C1, C2))
+    if (Constant *FC = ConstantFoldBinaryInstruction(Opcode, C1, C2))
       return FC;          // Fold a few common cases...
 
   std::vector<Constant*> argVec(1, C1); argVec.push_back(C2);
@@ -1491,30 +1526,35 @@ Constant* ConstantExpr::getSizeOf(const Type* Ty) {
 }
 
 Constant* ConstantExpr::getAlignOf(const Type* Ty) {
-  // alignof is implemented as: (i64) gep ({i8,Ty}*)null, 0, 1
+  // alignof is implemented as: (i64) gep ({i1,Ty}*)null, 0, 1
   // Note that a non-inbounds gep is used, as null isn't within any object.
   const Type *AligningTy = StructType::get(Ty->getContext(),
-                                   Type::getInt8Ty(Ty->getContext()), Ty, NULL);
+                                   Type::getInt1Ty(Ty->getContext()), Ty, NULL);
   Constant *NullPtr = Constant::getNullValue(AligningTy->getPointerTo());
-  Constant *Zero = ConstantInt::get(Type::getInt32Ty(Ty->getContext()), 0);
+  Constant *Zero = ConstantInt::get(Type::getInt64Ty(Ty->getContext()), 0);
   Constant *One = ConstantInt::get(Type::getInt32Ty(Ty->getContext()), 1);
   Constant *Indices[2] = { Zero, One };
   Constant *GEP = getGetElementPtr(NullPtr, Indices, 2);
   return getCast(Instruction::PtrToInt, GEP,
-                 Type::getInt32Ty(Ty->getContext()));
+                 Type::getInt64Ty(Ty->getContext()));
 }
 
 Constant* ConstantExpr::getOffsetOf(const StructType* STy, unsigned FieldNo) {
+  return getOffsetOf(STy, ConstantInt::get(Type::getInt32Ty(STy->getContext()),
+                                           FieldNo));
+}
+
+Constant* ConstantExpr::getOffsetOf(const Type* Ty, Constant *FieldNo) {
   // offsetof is implemented as: (i64) gep (Ty*)null, 0, FieldNo
   // Note that a non-inbounds gep is used, as null isn't within any object.
   Constant *GEPIdx[] = {
-    ConstantInt::get(Type::getInt64Ty(STy->getContext()), 0),
-    ConstantInt::get(Type::getInt32Ty(STy->getContext()), FieldNo)
+    ConstantInt::get(Type::getInt64Ty(Ty->getContext()), 0),
+    FieldNo
   };
   Constant *GEP = getGetElementPtr(
-                Constant::getNullValue(PointerType::getUnqual(STy)), GEPIdx, 2);
+                Constant::getNullValue(PointerType::getUnqual(Ty)), GEPIdx, 2);
   return getCast(Instruction::PtrToInt, GEP,
-                 Type::getInt64Ty(STy->getContext()));
+                 Type::getInt64Ty(Ty->getContext()));
 }
 
 Constant *ConstantExpr::getCompare(unsigned short pred, 
@@ -1528,8 +1568,7 @@ Constant *ConstantExpr::getSelectTy(const Type *ReqTy, Constant *C,
   assert(!SelectInst::areInvalidOperands(C, V1, V2)&&"Invalid select operands");
 
   if (ReqTy == V1->getType())
-    if (Constant *SC = ConstantFoldSelectInstruction(
-                                                ReqTy->getContext(), C, V1, V2))
+    if (Constant *SC = ConstantFoldSelectInstruction(C, V1, V2))
       return SC;        // Fold common cases
 
   std::vector<Constant*> argVec(3, C);
@@ -1549,9 +1588,8 @@ Constant *ConstantExpr::getGetElementPtrTy(const Type *ReqTy, Constant *C,
          cast<PointerType>(ReqTy)->getElementType() &&
          "GEP indices invalid!");
 
-  if (Constant *FC = ConstantFoldGetElementPtr(
-                              ReqTy->getContext(), C, /*inBounds=*/false,
-                              (Constant**)Idxs, NumIdx))
+  if (Constant *FC = ConstantFoldGetElementPtr(C, /*inBounds=*/false,
+                                               (Constant**)Idxs, NumIdx))
     return FC;          // Fold a few common cases...
 
   assert(isa<PointerType>(C->getType()) &&
@@ -1577,9 +1615,8 @@ Constant *ConstantExpr::getInBoundsGetElementPtrTy(const Type *ReqTy,
          cast<PointerType>(ReqTy)->getElementType() &&
          "GEP indices invalid!");
 
-  if (Constant *FC = ConstantFoldGetElementPtr(
-                              ReqTy->getContext(), C, /*inBounds=*/true,
-                              (Constant**)Idxs, NumIdx))
+  if (Constant *FC = ConstantFoldGetElementPtr(C, /*inBounds=*/true,
+                                               (Constant**)Idxs, NumIdx))
     return FC;          // Fold a few common cases...
 
   assert(isa<PointerType>(C->getType()) &&
@@ -1635,8 +1672,7 @@ ConstantExpr::getICmp(unsigned short pred, Constant *LHS, Constant *RHS) {
   assert(pred >= ICmpInst::FIRST_ICMP_PREDICATE && 
          pred <= ICmpInst::LAST_ICMP_PREDICATE && "Invalid ICmp Predicate");
 
-  if (Constant *FC = ConstantFoldCompareInstruction(
-                                             LHS->getContext(), pred, LHS, RHS))
+  if (Constant *FC = ConstantFoldCompareInstruction(pred, LHS, RHS))
     return FC;          // Fold a few common cases...
 
   // Look up the constant in the table first to ensure uniqueness
@@ -1659,8 +1695,7 @@ ConstantExpr::getFCmp(unsigned short pred, Constant *LHS, Constant *RHS) {
   assert(LHS->getType() == RHS->getType());
   assert(pred <= FCmpInst::LAST_FCMP_PREDICATE && "Invalid FCmp Predicate");
 
-  if (Constant *FC = ConstantFoldCompareInstruction(
-                                            LHS->getContext(), pred, LHS, RHS))
+  if (Constant *FC = ConstantFoldCompareInstruction(pred, LHS, RHS))
     return FC;          // Fold a few common cases...
 
   // Look up the constant in the table first to ensure uniqueness
@@ -1680,8 +1715,7 @@ ConstantExpr::getFCmp(unsigned short pred, Constant *LHS, Constant *RHS) {
 
 Constant *ConstantExpr::getExtractElementTy(const Type *ReqTy, Constant *Val,
                                             Constant *Idx) {
-  if (Constant *FC = ConstantFoldExtractElementInstruction(
-                                                ReqTy->getContext(), Val, Idx))
+  if (Constant *FC = ConstantFoldExtractElementInstruction(Val, Idx))
     return FC;          // Fold a few common cases.
   // Look up the constant in the table first to ensure uniqueness
   std::vector<Constant*> ArgVec(1, Val);
@@ -1703,8 +1737,7 @@ Constant *ConstantExpr::getExtractElement(Constant *Val, Constant *Idx) {
 
 Constant *ConstantExpr::getInsertElementTy(const Type *ReqTy, Constant *Val,
                                            Constant *Elt, Constant *Idx) {
-  if (Constant *FC = ConstantFoldInsertElementInstruction(
-                                            ReqTy->getContext(), Val, Elt, Idx))
+  if (Constant *FC = ConstantFoldInsertElementInstruction(Val, Elt, Idx))
     return FC;          // Fold a few common cases.
   // Look up the constant in the table first to ensure uniqueness
   std::vector<Constant*> ArgVec(1, Val);
@@ -1729,8 +1762,7 @@ Constant *ConstantExpr::getInsertElement(Constant *Val, Constant *Elt,
 
 Constant *ConstantExpr::getShuffleVectorTy(const Type *ReqTy, Constant *V1,
                                            Constant *V2, Constant *Mask) {
-  if (Constant *FC = ConstantFoldShuffleVectorInstruction(
-                                            ReqTy->getContext(), V1, V2, Mask))
+  if (Constant *FC = ConstantFoldShuffleVectorInstruction(V1, V2, Mask))
     return FC;          // Fold a few common cases...
   // Look up the constant in the table first to ensure uniqueness
   std::vector<Constant*> ArgVec(1, V1);
@@ -1763,8 +1795,7 @@ Constant *ConstantExpr::getInsertValueTy(const Type *ReqTy, Constant *Agg,
          "insertvalue type invalid!");
   assert(Agg->getType()->isFirstClassType() &&
          "Non-first-class type for constant InsertValue expression");
-  Constant *FC = ConstantFoldInsertValueInstruction(
-                                  ReqTy->getContext(), Agg, Val, Idxs, NumIdx);
+  Constant *FC = ConstantFoldInsertValueInstruction(Agg, Val, Idxs, NumIdx);
   assert(FC && "InsertValue constant expr couldn't be folded!");
   return FC;
 }
@@ -1790,8 +1821,7 @@ Constant *ConstantExpr::getExtractValueTy(const Type *ReqTy, Constant *Agg,
          "extractvalue indices invalid!");
   assert(Agg->getType()->isFirstClassType() &&
          "Non-first-class type for constant extractvalue expression");
-  Constant *FC = ConstantFoldExtractValueInstruction(
-                                        ReqTy->getContext(), Agg, Idxs, NumIdx);
+  Constant *FC = ConstantFoldExtractValueInstruction(Agg, Idxs, NumIdx);
   assert(FC && "ExtractValue constant expr couldn't be folded!");
   return FC;
 }
@@ -2081,6 +2111,11 @@ void ConstantStruct::replaceUsesOfWithOnConstant(Value *From, Value *To,
   destroyConstant();
 }
 
+void ConstantUnion::replaceUsesOfWithOnConstant(Value *From, Value *To,
+                                                 Use *U) {
+  assert(false && "Implement replaceUsesOfWithOnConstant for unions");
+}
+
 void ConstantVector::replaceUsesOfWithOnConstant(Value *From, Value *To,
                                                  Use *U) {
   assert(isa<Constant>(To) && "Cannot make Constant refer to non-constant!");
diff --git a/libclamav/c++/llvm/lib/VMCore/ConstantsContext.h b/libclamav/c++/llvm/lib/VMCore/ConstantsContext.h
index 08224e4..c798ba2 100644
--- a/libclamav/c++/llvm/lib/VMCore/ConstantsContext.h
+++ b/libclamav/c++/llvm/lib/VMCore/ConstantsContext.h
@@ -341,6 +341,13 @@ struct ConstantTraits< std::vector<T, Alloc> > {
   }
 };
 
+template<>
+struct ConstantTraits<Constant *> {
+  static unsigned uses(Constant * const & v) {
+    return 1;
+  }
+};
+
 template<class ConstantClass, class TypeClass, class ValType>
 struct ConstantCreator {
   static ConstantClass *create(const TypeClass *Ty, const ValType &V) {
@@ -470,6 +477,14 @@ struct ConstantKeyData<ConstantStruct> {
   }
 };
 
+template<>
+struct ConstantKeyData<ConstantUnion> {
+  typedef Constant* ValType;
+  static ValType getValType(ConstantUnion *CU) {
+    return cast<Constant>(CU->getOperand(0));
+  }
+};
+
 // ConstantPointerNull does not take extra "value" argument...
 template<class ValType>
 struct ConstantCreator<ConstantPointerNull, PointerType, ValType> {
diff --git a/libclamav/c++/llvm/lib/VMCore/Core.cpp b/libclamav/c++/llvm/lib/VMCore/Core.cpp
index 984d245..a044fc5 100644
--- a/libclamav/c++/llvm/lib/VMCore/Core.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Core.cpp
@@ -20,12 +20,13 @@
 #include "llvm/GlobalAlias.h"
 #include "llvm/LLVMContext.h"
 #include "llvm/TypeSymbolTable.h"
-#include "llvm/ModuleProvider.h"
 #include "llvm/InlineAsm.h"
 #include "llvm/IntrinsicInst.h"
-#include "llvm/Support/MemoryBuffer.h"
 #include "llvm/Support/CallSite.h"
+#include "llvm/Support/Debug.h"
 #include "llvm/Support/ErrorHandling.h"
+#include "llvm/Support/MemoryBuffer.h"
+#include "llvm/Support/raw_ostream.h"
 #include <cassert>
 #include <cstdlib>
 #include <cstring>
@@ -140,6 +141,8 @@ LLVMTypeKind LLVMGetTypeKind(LLVMTypeRef Ty) {
     return LLVMFunctionTypeKind;
   case Type::StructTyID:
     return LLVMStructTypeKind;
+  case Type::UnionTyID:
+    return LLVMUnionTypeKind;
   case Type::ArrayTyID:
     return LLVMArrayTypeKind;
   case Type::PointerTyID:
@@ -298,6 +301,35 @@ LLVMBool LLVMIsPackedStruct(LLVMTypeRef StructTy) {
   return unwrap<StructType>(StructTy)->isPacked();
 }
 
+/*--.. Operations on union types ..........................................--*/
+
+LLVMTypeRef LLVMUnionTypeInContext(LLVMContextRef C, LLVMTypeRef *ElementTypes,
+                                   unsigned ElementCount) {
+  SmallVector<const Type*, 8> Tys;
+  for (LLVMTypeRef *I = ElementTypes,
+                   *E = ElementTypes + ElementCount; I != E; ++I)
+    Tys.push_back(unwrap(*I));
+  
+  return wrap(UnionType::get(&Tys[0], Tys.size()));
+}
+
+LLVMTypeRef LLVMUnionType(LLVMTypeRef *ElementTypes,
+                           unsigned ElementCount, int Packed) {
+  return LLVMUnionTypeInContext(LLVMGetGlobalContext(), ElementTypes,
+                                ElementCount);
+}
+
+unsigned LLVMCountUnionElementTypes(LLVMTypeRef UnionTy) {
+  return unwrap<UnionType>(UnionTy)->getNumElements();
+}
+
+void LLVMGetUnionElementTypes(LLVMTypeRef UnionTy, LLVMTypeRef *Dest) {
+  UnionType *Ty = unwrap<UnionType>(UnionTy);
+  for (FunctionType::param_iterator I = Ty->element_begin(),
+                                    E = Ty->element_end(); I != E; ++I)
+    *Dest++ = wrap(*I);
+}
+
 /*--.. Operations on array, pointer, and vector types (sequence types) .....--*/
 
 LLVMTypeRef LLVMArrayType(LLVMTypeRef ElementType, unsigned ElementCount) {
@@ -932,8 +964,6 @@ LLVMLinkage LLVMGetLinkage(LLVMValueRef Global) {
     return LLVMDLLExportLinkage;
   case GlobalValue::ExternalWeakLinkage:
     return LLVMExternalWeakLinkage;
-  case GlobalValue::GhostLinkage:
-    return LLVMGhostLinkage;
   case GlobalValue::CommonLinkage:
     return LLVMCommonLinkage;
   }
@@ -988,7 +1018,8 @@ void LLVMSetLinkage(LLVMValueRef Global, LLVMLinkage Linkage) {
     GV->setLinkage(GlobalValue::ExternalWeakLinkage);
     break;
   case LLVMGhostLinkage:
-    GV->setLinkage(GlobalValue::GhostLinkage);
+    DEBUG(errs()
+          << "LLVMSetLinkage(): LLVMGhostLinkage is no longer supported.");
     break;
   case LLVMCommonLinkage:
     GV->setLinkage(GlobalValue::CommonLinkage);
@@ -1965,7 +1996,7 @@ LLVMValueRef LLVMBuildPtrDiff(LLVMBuilderRef B, LLVMValueRef LHS,
 
 LLVMModuleProviderRef
 LLVMCreateModuleProviderForExistingModule(LLVMModuleRef M) {
-  return wrap(new ExistingModuleProvider(unwrap(M)));
+  return reinterpret_cast<LLVMModuleProviderRef>(M);
 }
 
 void LLVMDisposeModuleProvider(LLVMModuleProviderRef MP) {
diff --git a/libclamav/c++/llvm/lib/VMCore/GVMaterializer.cpp b/libclamav/c++/llvm/lib/VMCore/GVMaterializer.cpp
new file mode 100644
index 0000000..f77a9c9
--- /dev/null
+++ b/libclamav/c++/llvm/lib/VMCore/GVMaterializer.cpp
@@ -0,0 +1,18 @@
+//===-- GVMaterializer.cpp - Base implementation for GV materializers -----===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// Minimal implementation of the abstract interface for materializing
+// GlobalValues.
+//
+//===----------------------------------------------------------------------===//
+
+#include "llvm/GVMaterializer.h"
+using namespace llvm;
+
+GVMaterializer::~GVMaterializer() {}
diff --git a/libclamav/c++/llvm/lib/VMCore/Globals.cpp b/libclamav/c++/llvm/lib/VMCore/Globals.cpp
index 94bf3de..f149c44 100644
--- a/libclamav/c++/llvm/lib/VMCore/Globals.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Globals.cpp
@@ -43,6 +43,19 @@ static bool removeDeadUsersOfConstant(const Constant *C) {
   return true;
 }
 
+bool GlobalValue::isMaterializable() const {
+  return getParent()->isMaterializable(this);
+}
+bool GlobalValue::isDematerializable() const {
+  return getParent()->isDematerializable(this);
+}
+bool GlobalValue::Materialize(std::string *ErrInfo) {
+  return getParent()->Materialize(this, ErrInfo);
+}
+void GlobalValue::Dematerialize() {
+  getParent()->Dematerialize(this);
+}
+
 /// removeDeadConstantUsers - If there are any dead constant users dangling
 /// off of this global value, remove them.  This method is useful for clients
 /// that want to check to see if a global is unused, but don't want to deal
diff --git a/libclamav/c++/llvm/lib/VMCore/IRBuilder.cpp b/libclamav/c++/llvm/lib/VMCore/IRBuilder.cpp
index 699bf0f..9f2786e 100644
--- a/libclamav/c++/llvm/lib/VMCore/IRBuilder.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/IRBuilder.cpp
@@ -19,7 +19,7 @@
 using namespace llvm;
 
 /// CreateGlobalString - Make a new global variable with an initializer that
-/// has array of i8 type filled in the the nul terminated string value
+/// has array of i8 type filled in with the nul terminated string value
 /// specified.  If Name is specified, it is the name of the global variable
 /// created.
 Value *IRBuilderBase::CreateGlobalString(const char *Str, const Twine &Name) {
diff --git a/libclamav/c++/llvm/lib/VMCore/Instructions.cpp b/libclamav/c++/llvm/lib/VMCore/Instructions.cpp
index e2b920e..4ec8295 100644
--- a/libclamav/c++/llvm/lib/VMCore/Instructions.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Instructions.cpp
@@ -1786,6 +1786,18 @@ BinaryOperator *BinaryOperator::CreateNSWNeg(Value *Op, const Twine &Name,
   return BinaryOperator::CreateNSWSub(zero, Op, Name, InsertAtEnd);
 }
 
+BinaryOperator *BinaryOperator::CreateNUWNeg(Value *Op, const Twine &Name,
+                                             Instruction *InsertBefore) {
+  Value *zero = ConstantFP::getZeroValueForNegation(Op->getType());
+  return BinaryOperator::CreateNUWSub(zero, Op, Name, InsertBefore);
+}
+
+BinaryOperator *BinaryOperator::CreateNUWNeg(Value *Op, const Twine &Name,
+                                             BasicBlock *InsertAtEnd) {
+  Value *zero = ConstantFP::getZeroValueForNegation(Op->getType());
+  return BinaryOperator::CreateNUWSub(zero, Op, Name, InsertAtEnd);
+}
+
 BinaryOperator *BinaryOperator::CreateFNeg(Value *Op, const Twine &Name,
                                            Instruction *InsertBefore) {
   Value *zero = ConstantFP::getZeroValueForNegation(Op->getType());
@@ -2504,7 +2516,8 @@ CastInst::castIsValid(Instruction::CastOps op, Value *S, const Type *DstTy) {
 
   // Check for type sanity on the arguments
   const Type *SrcTy = S->getType();
-  if (!SrcTy->isFirstClassType() || !DstTy->isFirstClassType())
+  if (!SrcTy->isFirstClassType() || !DstTy->isFirstClassType() ||
+      SrcTy->isAggregateType() || DstTy->isAggregateType())
     return false;
 
   // Get the size of the types in bits, we'll need this later
@@ -2865,25 +2878,53 @@ ICmpInst::makeConstantRange(Predicate pred, const APInt &C) {
   default: llvm_unreachable("Invalid ICmp opcode to ConstantRange ctor!");
   case ICmpInst::ICMP_EQ: Upper++; break;
   case ICmpInst::ICMP_NE: Lower++; break;
-  case ICmpInst::ICMP_ULT: Lower = APInt::getMinValue(BitWidth); break;
-  case ICmpInst::ICMP_SLT: Lower = APInt::getSignedMinValue(BitWidth); break;
+  case ICmpInst::ICMP_ULT:
+    Lower = APInt::getMinValue(BitWidth);
+    // Check for an empty-set condition.
+    if (Lower == Upper)
+      return ConstantRange(BitWidth, /*isFullSet=*/false);
+    break;
+  case ICmpInst::ICMP_SLT:
+    Lower = APInt::getSignedMinValue(BitWidth);
+    // Check for an empty-set condition.
+    if (Lower == Upper)
+      return ConstantRange(BitWidth, /*isFullSet=*/false);
+    break;
   case ICmpInst::ICMP_UGT: 
     Lower++; Upper = APInt::getMinValue(BitWidth);        // Min = Next(Max)
+    // Check for an empty-set condition.
+    if (Lower == Upper)
+      return ConstantRange(BitWidth, /*isFullSet=*/false);
     break;
   case ICmpInst::ICMP_SGT:
     Lower++; Upper = APInt::getSignedMinValue(BitWidth);  // Min = Next(Max)
+    // Check for an empty-set condition.
+    if (Lower == Upper)
+      return ConstantRange(BitWidth, /*isFullSet=*/false);
     break;
   case ICmpInst::ICMP_ULE: 
     Lower = APInt::getMinValue(BitWidth); Upper++; 
+    // Check for a full-set condition.
+    if (Lower == Upper)
+      return ConstantRange(BitWidth, /*isFullSet=*/true);
     break;
   case ICmpInst::ICMP_SLE: 
     Lower = APInt::getSignedMinValue(BitWidth); Upper++; 
+    // Check for a full-set condition.
+    if (Lower == Upper)
+      return ConstantRange(BitWidth, /*isFullSet=*/true);
     break;
   case ICmpInst::ICMP_UGE:
     Upper = APInt::getMinValue(BitWidth);        // Min = Next(Max)
+    // Check for a full-set condition.
+    if (Lower == Upper)
+      return ConstantRange(BitWidth, /*isFullSet=*/true);
     break;
   case ICmpInst::ICMP_SGE:
     Upper = APInt::getSignedMinValue(BitWidth);  // Min = Next(Max)
+    // Check for a full-set condition.
+    if (Lower == Upper)
+      return ConstantRange(BitWidth, /*isFullSet=*/true);
     break;
   }
   return ConstantRange(Lower, Upper);
diff --git a/libclamav/c++/llvm/lib/VMCore/LLVMContextImpl.h b/libclamav/c++/llvm/lib/VMCore/LLVMContextImpl.h
index ccca789..62491d8 100644
--- a/libclamav/c++/llvm/lib/VMCore/LLVMContextImpl.h
+++ b/libclamav/c++/llvm/lib/VMCore/LLVMContextImpl.h
@@ -116,6 +116,10 @@ public:
     ConstantStruct, true /*largekey*/> StructConstantsTy;
   StructConstantsTy StructConstants;
   
+  typedef ConstantUniqueMap<Constant*, UnionType, ConstantUnion>
+      UnionConstantsTy;
+  UnionConstantsTy UnionConstants;
+  
   typedef ConstantUniqueMap<std::vector<Constant*>, VectorType,
                             ConstantVector> VectorConstantsTy;
   VectorConstantsTy VectorConstants;
@@ -159,12 +163,16 @@ public:
   TypeMap<PointerValType, PointerType> PointerTypes;
   TypeMap<FunctionValType, FunctionType> FunctionTypes;
   TypeMap<StructValType, StructType> StructTypes;
+  TypeMap<UnionValType, UnionType> UnionTypes;
   TypeMap<IntegerValType, IntegerType> IntegerTypes;
 
   // Opaque types are not structurally uniqued, so don't use TypeMap.
   typedef SmallPtrSet<const OpaqueType*, 8> OpaqueTypesTy;
   OpaqueTypesTy OpaqueTypes;
-  
+
+  /// Used as an abstract type that will never be resolved.
+  OpaqueType *const AlwaysOpaqueTy;
+
 
   /// ValueHandles - This map keeps track of all of the value handles that are
   /// watching a Value*.  The Value::HasValueHandle bit is used to know
@@ -196,7 +204,12 @@ public:
     Int8Ty(C, 8),
     Int16Ty(C, 16),
     Int32Ty(C, 32),
-    Int64Ty(C, 64) { }
+    Int64Ty(C, 64),
+    AlwaysOpaqueTy(new OpaqueType(C)) {
+    // Make sure the AlwaysOpaqueTy stays alive as long as the Context.
+    AlwaysOpaqueTy->addRef();
+    OpaqueTypes.insert(AlwaysOpaqueTy);
+  }
 
   ~LLVMContextImpl() {
     ExprConstants.freeConstants();
@@ -217,6 +230,7 @@ public:
         delete I->second;
     }
     MDNodeSet.clear();
+    AlwaysOpaqueTy->dropRef();
     for (OpaqueTypesTy::iterator I = OpaqueTypes.begin(), E = OpaqueTypes.end();
         I != E; ++I) {
       (*I)->AbstractTypeUsers.clear();
diff --git a/libclamav/c++/llvm/lib/VMCore/Module.cpp b/libclamav/c++/llvm/lib/VMCore/Module.cpp
index 503e708..001bb00 100644
--- a/libclamav/c++/llvm/lib/VMCore/Module.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Module.cpp
@@ -15,6 +15,7 @@
 #include "llvm/InstrTypes.h"
 #include "llvm/Constants.h"
 #include "llvm/DerivedTypes.h"
+#include "llvm/GVMaterializer.h"
 #include "llvm/LLVMContext.h"
 #include "llvm/ADT/STLExtras.h"
 #include "llvm/ADT/StringExtras.h"
@@ -56,7 +57,7 @@ template class llvm::SymbolTableListTraits<GlobalAlias, Module>;
 //
 
 Module::Module(StringRef MID, LLVMContext& C)
-  : Context(C), ModuleID(MID), DataLayout("")  {
+  : Context(C), Materializer(NULL), ModuleID(MID), DataLayout("")  {
   ValSymTab = new ValueSymbolTable();
   TypeSymTab = new TypeSymbolTable();
   NamedMDSymTab = new MDSymbolTable();
@@ -372,6 +373,52 @@ std::string Module::getTypeName(const Type *Ty) const {
 }
 
 //===----------------------------------------------------------------------===//
+// Methods to control the materialization of GlobalValues in the Module.
+//
+void Module::setMaterializer(GVMaterializer *GVM) {
+  assert(!Materializer &&
+         "Module already has a GVMaterializer.  Call MaterializeAllPermanently"
+         " to clear it out before setting another one.");
+  Materializer.reset(GVM);
+}
+
+bool Module::isMaterializable(const GlobalValue *GV) const {
+  if (Materializer)
+    return Materializer->isMaterializable(GV);
+  return false;
+}
+
+bool Module::isDematerializable(const GlobalValue *GV) const {
+  if (Materializer)
+    return Materializer->isDematerializable(GV);
+  return false;
+}
+
+bool Module::Materialize(GlobalValue *GV, std::string *ErrInfo) {
+  if (Materializer)
+    return Materializer->Materialize(GV, ErrInfo);
+  return false;
+}
+
+void Module::Dematerialize(GlobalValue *GV) {
+  if (Materializer)
+    return Materializer->Dematerialize(GV);
+}
+
+bool Module::MaterializeAll(std::string *ErrInfo) {
+  if (!Materializer)
+    return false;
+  return Materializer->MaterializeModule(this, ErrInfo);
+}
+
+bool Module::MaterializeAllPermanently(std::string *ErrInfo) {
+  if (MaterializeAll(ErrInfo))
+    return true;
+  Materializer.reset();
+  return false;
+}
+
+//===----------------------------------------------------------------------===//
 // Other module related stuff.
 //
 
diff --git a/libclamav/c++/llvm/lib/VMCore/ModuleProvider.cpp b/libclamav/c++/llvm/lib/VMCore/ModuleProvider.cpp
deleted file mode 100644
index cfff97c..0000000
--- a/libclamav/c++/llvm/lib/VMCore/ModuleProvider.cpp
+++ /dev/null
@@ -1,26 +0,0 @@
-//===-- ModuleProvider.cpp - Base implementation for module providers -----===//
-//
-//                     The LLVM Compiler Infrastructure
-//
-// This file is distributed under the University of Illinois Open Source
-// License. See LICENSE.TXT for details.
-//
-//===----------------------------------------------------------------------===//
-//
-// Minimal implementation of the abstract interface for providing a module.
-//
-//===----------------------------------------------------------------------===//
-
-#include "llvm/ModuleProvider.h"
-#include "llvm/Module.h"
-using namespace llvm;
-
-/// ctor - always have a valid Module
-///
-ModuleProvider::ModuleProvider() : TheModule(0) { }
-
-/// dtor - when we leave, we take our Module with us
-///
-ModuleProvider::~ModuleProvider() {
-  delete TheModule;
-}
diff --git a/libclamav/c++/llvm/lib/VMCore/Pass.cpp b/libclamav/c++/llvm/lib/VMCore/Pass.cpp
index 45000f2..a782e5a 100644
--- a/libclamav/c++/llvm/lib/VMCore/Pass.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Pass.cpp
@@ -16,7 +16,6 @@
 #include "llvm/Pass.h"
 #include "llvm/PassManager.h"
 #include "llvm/Module.h"
-#include "llvm/ModuleProvider.h"
 #include "llvm/ADT/STLExtras.h"
 #include "llvm/ADT/StringMap.h"
 #include "llvm/Support/Debug.h"
@@ -195,6 +194,9 @@ PassManagerType BasicBlockPass::getPotentialPassManagerType() const {
 //
 namespace {
 class PassRegistrar {
+  /// Guards the contents of this class.
+  mutable sys::SmartMutex<true> Lock;
+
   /// PassInfoMap - Keep track of the passinfo object for each registered llvm
   /// pass.
   typedef std::map<intptr_t, const PassInfo*> MapType;
@@ -214,16 +216,19 @@ class PassRegistrar {
 public:
   
   const PassInfo *GetPassInfo(intptr_t TI) const {
+    sys::SmartScopedLock<true> Guard(Lock);
     MapType::const_iterator I = PassInfoMap.find(TI);
     return I != PassInfoMap.end() ? I->second : 0;
   }
   
   const PassInfo *GetPassInfo(StringRef Arg) const {
+    sys::SmartScopedLock<true> Guard(Lock);
     StringMapType::const_iterator I = PassInfoStringMap.find(Arg);
     return I != PassInfoStringMap.end() ? I->second : 0;
   }
   
   void RegisterPass(const PassInfo &PI) {
+    sys::SmartScopedLock<true> Guard(Lock);
     bool Inserted =
       PassInfoMap.insert(std::make_pair(PI.getTypeInfo(),&PI)).second;
     assert(Inserted && "Pass registered multiple times!"); Inserted=Inserted;
@@ -231,6 +236,7 @@ public:
   }
   
   void UnregisterPass(const PassInfo &PI) {
+    sys::SmartScopedLock<true> Guard(Lock);
     MapType::iterator I = PassInfoMap.find(PI.getTypeInfo());
     assert(I != PassInfoMap.end() && "Pass registered but not in map!");
     
@@ -240,6 +246,7 @@ public:
   }
   
   void EnumerateWith(PassRegistrationListener *L) {
+    sys::SmartScopedLock<true> Guard(Lock);
     for (MapType::const_iterator I = PassInfoMap.begin(),
          E = PassInfoMap.end(); I != E; ++I)
       L->passEnumerate(I->second);
@@ -250,6 +257,7 @@ public:
   void RegisterAnalysisGroup(PassInfo *InterfaceInfo,
                              const PassInfo *ImplementationInfo,
                              bool isDefault) {
+    sys::SmartScopedLock<true> Guard(Lock);
     AnalysisGroupInfo &AGI = AnalysisGroupInfoMap[InterfaceInfo];
     assert(AGI.Implementations.count(ImplementationInfo) == 0 &&
            "Cannot add a pass to the same analysis group more than once!");
diff --git a/libclamav/c++/llvm/lib/VMCore/PassManager.cpp b/libclamav/c++/llvm/lib/VMCore/PassManager.cpp
index 0c0d64e..a1d554e 100644
--- a/libclamav/c++/llvm/lib/VMCore/PassManager.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/PassManager.cpp
@@ -18,7 +18,6 @@
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/Timer.h"
 #include "llvm/Module.h"
-#include "llvm/ModuleProvider.h"
 #include "llvm/Support/ErrorHandling.h"
 #include "llvm/Support/ManagedStatic.h"
 #include "llvm/Support/raw_ostream.h"
@@ -1194,15 +1193,13 @@ bool BBPassManager::doFinalization(Function &F) {
 // FunctionPassManager implementation
 
 /// Create new Function pass manager
-FunctionPassManager::FunctionPassManager(ModuleProvider *P) {
+FunctionPassManager::FunctionPassManager(Module *m) : M(m) {
   FPM = new FunctionPassManagerImpl(0);
   // FPM is the top level manager.
   FPM->setTopLevelManager(FPM);
 
   AnalysisResolver *AR = new AnalysisResolver(*FPM);
   FPM->setResolver(AR);
-  
-  MP = P;
 }
 
 FunctionPassManager::~FunctionPassManager() {
@@ -1224,7 +1221,7 @@ void FunctionPassManager::add(Pass *P) {
 ///
 bool FunctionPassManager::run(Function &F) {
   std::string errstr;
-  if (MP->materializeFunction(&F, &errstr)) {
+  if (F.Materialize(&errstr)) {
     llvm_report_error("Error reading bitcode file: " + errstr);
   }
   return FPM->run(F);
@@ -1234,13 +1231,13 @@ bool FunctionPassManager::run(Function &F) {
 /// doInitialization - Run all of the initializers for the function passes.
 ///
 bool FunctionPassManager::doInitialization() {
-  return FPM->doInitialization(*MP->getModule());
+  return FPM->doInitialization(*M);
 }
 
 /// doFinalization - Run all of the finalizers for the function passes.
 ///
 bool FunctionPassManager::doFinalization() {
-  return FPM->doFinalization(*MP->getModule());
+  return FPM->doFinalization(*M);
 }
 
 //===----------------------------------------------------------------------===//
diff --git a/libclamav/c++/llvm/lib/VMCore/Type.cpp b/libclamav/c++/llvm/lib/VMCore/Type.cpp
index 044de4f..b1cdad5 100644
--- a/libclamav/c++/llvm/lib/VMCore/Type.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Type.cpp
@@ -50,8 +50,8 @@ void AbstractTypeUser::setType(Value *V, const Type *NewTy) {
 
 /// Because of the way Type subclasses are allocated, this function is necessary
 /// to use the correct kind of "delete" operator to deallocate the Type object.
-/// Some type objects (FunctionTy, StructTy) allocate additional space after 
-/// the space for their derived type to hold the contained types array of
+/// Some type objects (FunctionTy, StructTy, UnionTy) allocate additional space
+/// after the space for their derived type to hold the contained types array of
 /// PATypeHandles. Using this allocation scheme means all the PATypeHandles are
 /// allocated with the type object, decreasing allocations and eliminating the
 /// need for a std::vector to be used in the Type class itself. 
@@ -61,7 +61,8 @@ void Type::destroy() const {
   // Structures and Functions allocate their contained types past the end of
   // the type object itself. These need to be destroyed differently than the
   // other types.
-  if (isa<FunctionType>(this) || isa<StructType>(this)) {
+  if (isa<FunctionType>(this) || isa<StructType>(this) ||
+      isa<UnionType>(this)) {
     // First, make sure we destruct any PATypeHandles allocated by these
     // subclasses.  They must be manually destructed. 
     for (unsigned i = 0; i < NumContainedTys; ++i)
@@ -71,8 +72,10 @@ void Type::destroy() const {
     // to delete this as an array of char.
     if (isa<FunctionType>(this))
       static_cast<const FunctionType*>(this)->FunctionType::~FunctionType();
-    else
+    else if (isa<StructType>(this))
       static_cast<const StructType*>(this)->StructType::~StructType();
+    else
+      static_cast<const UnionType*>(this)->UnionType::~UnionType();
 
     // Finally, remove the memory as an array deallocation of the chars it was
     // constructed from.
@@ -226,7 +229,7 @@ bool Type::isSizedDerivedType() const {
   if (const VectorType *PTy = dyn_cast<VectorType>(this))
     return PTy->getElementType()->isSized();
 
-  if (!isa<StructType>(this)) 
+  if (!isa<StructType>(this) && !isa<UnionType>(this)) 
     return false;
 
   // Okay, our struct is sized if all of the elements are...
@@ -308,6 +311,32 @@ const Type *StructType::getTypeAtIndex(unsigned Idx) const {
   return ContainedTys[Idx];
 }
 
+
+bool UnionType::indexValid(const Value *V) const {
+  // Union indexes require 32-bit integer constants.
+  if (V->getType()->isInteger(32))
+    if (const ConstantInt *CU = dyn_cast<ConstantInt>(V))
+      return indexValid(CU->getZExtValue());
+  return false;
+}
+
+bool UnionType::indexValid(unsigned V) const {
+  return V < NumContainedTys;
+}
+
+// getTypeAtIndex - Given an index value into the type, return the type of the
+// element.  For a structure type, this must be a constant value...
+//
+const Type *UnionType::getTypeAtIndex(const Value *V) const {
+  unsigned Idx = (unsigned)cast<ConstantInt>(V)->getZExtValue();
+  return getTypeAtIndex(Idx);
+}
+
+const Type *UnionType::getTypeAtIndex(unsigned Idx) const {
+  assert(indexValid(Idx) && "Invalid structure index!");
+  return ContainedTys[Idx];
+}
+
 //===----------------------------------------------------------------------===//
 //                          Primitive 'Type' data
 //===----------------------------------------------------------------------===//
@@ -463,6 +492,23 @@ StructType::StructType(LLVMContext &C,
   setAbstract(isAbstract);
 }
 
+UnionType::UnionType(LLVMContext &C,const Type* const* Types, unsigned NumTypes)
+  : CompositeType(C, UnionTyID) {
+  ContainedTys = reinterpret_cast<PATypeHandle*>(this + 1);
+  NumContainedTys = NumTypes;
+  bool isAbstract = false;
+  for (unsigned i = 0; i < NumTypes; ++i) {
+    assert(Types[i] && "<null> type for union field!");
+    assert(isValidElementType(Types[i]) &&
+           "Invalid type for union element!");
+    new (&ContainedTys[i]) PATypeHandle(Types[i], this);
+    isAbstract |= Types[i]->isAbstract();
+  }
+
+  // Calculate whether or not this type is abstract
+  setAbstract(isAbstract);
+}
+
 ArrayType::ArrayType(const Type *ElType, uint64_t NumEl)
   : SequentialType(ArrayTyID, ElType) {
   NumElements = NumEl;
@@ -507,30 +553,7 @@ void DerivedType::dropAllTypeUses() {
   if (NumContainedTys != 0) {
     // The type must stay abstract.  To do this, we insert a pointer to a type
     // that will never get resolved, thus will always be abstract.
-    static Type *AlwaysOpaqueTy = 0;
-    static PATypeHolder* Holder = 0;
-    Type *tmp = AlwaysOpaqueTy;
-    if (llvm_is_multithreaded()) {
-      sys::MemoryFence();
-      if (!tmp) {
-        llvm_acquire_global_lock();
-        tmp = AlwaysOpaqueTy;
-        if (!tmp) {
-          tmp = OpaqueType::get(getContext());
-          PATypeHolder* tmp2 = new PATypeHolder(tmp);
-          sys::MemoryFence();
-          AlwaysOpaqueTy = tmp;
-          Holder = tmp2;
-        }
-      
-        llvm_release_global_lock();
-      }
-    } else if (!AlwaysOpaqueTy) {
-      AlwaysOpaqueTy = OpaqueType::get(getContext());
-      Holder = new PATypeHolder(AlwaysOpaqueTy);
-    } 
-        
-    ContainedTys[0] = AlwaysOpaqueTy;
+    ContainedTys[0] = getContext().pImpl->AlwaysOpaqueTy;
 
     // Change the rest of the types to be Int32Ty's.  It doesn't matter what we
     // pick so long as it doesn't point back to this type.  We choose something
@@ -667,6 +690,13 @@ static bool TypesEqual(const Type *Ty, const Type *Ty2,
       if (!TypesEqual(STy->getElementType(i), STy2->getElementType(i), EqTypes))
         return false;
     return true;
+  } else if (const UnionType *UTy = dyn_cast<UnionType>(Ty)) {
+    const UnionType *UTy2 = cast<UnionType>(Ty2);
+    if (UTy->getNumElements() != UTy2->getNumElements()) return false;
+    for (unsigned i = 0, e = UTy2->getNumElements(); i != e; ++i)
+      if (!TypesEqual(UTy->getElementType(i), UTy2->getElementType(i), EqTypes))
+        return false;
+    return true;
   } else if (const ArrayType *ATy = dyn_cast<ArrayType>(Ty)) {
     const ArrayType *ATy2 = cast<ArrayType>(Ty2);
     return ATy->getNumElements() == ATy2->getNumElements() &&
@@ -924,10 +954,64 @@ StructType *StructType::get(LLVMContext &Context, const Type *type, ...) {
 }
 
 bool StructType::isValidElementType(const Type *ElemTy) {
-  return ElemTy->getTypeID() != VoidTyID && ElemTy->getTypeID() != LabelTyID &&
-         ElemTy->getTypeID() != MetadataTyID && !isa<FunctionType>(ElemTy);
+  return !ElemTy->isVoidTy() && !ElemTy->isLabelTy() &&
+         !ElemTy->isMetadataTy() && !isa<FunctionType>(ElemTy);
+}
+
+
+//===----------------------------------------------------------------------===//
+// Union Type Factory...
+//
+
+UnionType *UnionType::get(const Type* const* Types, unsigned NumTypes) {
+  assert(NumTypes > 0 && "union must have at least one member type!");
+  UnionValType UTV(Types, NumTypes);
+  UnionType *UT = 0;
+  
+  LLVMContextImpl *pImpl = Types[0]->getContext().pImpl;
+  
+  UT = pImpl->UnionTypes.get(UTV);
+    
+  if (!UT) {
+    // Value not found.  Derive a new type!
+    UT = (UnionType*) operator new(sizeof(UnionType) +
+                                   sizeof(PATypeHandle) * NumTypes);
+    new (UT) UnionType(Types[0]->getContext(), Types, NumTypes);
+    pImpl->UnionTypes.add(UTV, UT);
+  }
+#ifdef DEBUG_MERGE_TYPES
+  DEBUG(dbgs() << "Derived new type: " << *UT << "\n");
+#endif
+  return UT;
+}
+
+UnionType *UnionType::get(const Type *type, ...) {
+  va_list ap;
+  SmallVector<const llvm::Type*, 8> UnionFields;
+  va_start(ap, type);
+  while (type) {
+    UnionFields.push_back(type);
+    type = va_arg(ap, llvm::Type*);
+  }
+  unsigned NumTypes = UnionFields.size();
+  assert(NumTypes > 0 && "union must have at least one member type!");
+  return llvm::UnionType::get(&UnionFields[0], NumTypes);
 }
 
+bool UnionType::isValidElementType(const Type *ElemTy) {
+  return !ElemTy->isVoidTy() && !ElemTy->isLabelTy() &&
+         !ElemTy->isMetadataTy() && !ElemTy->isFunction();
+}
+
+int UnionType::getElementTypeIndex(const Type *ElemTy) const {
+  int index = 0;
+  for (UnionType::element_iterator I = element_begin(), E = element_end();
+       I != E; ++I, ++index) {
+     if (ElemTy == *I) return index;
+  }
+  
+  return -1;
+}
 
 //===----------------------------------------------------------------------===//
 // Pointer Type Factory...
@@ -1192,6 +1276,21 @@ void StructType::typeBecameConcrete(const DerivedType *AbsTy) {
 // concrete - this could potentially change us from an abstract type to a
 // concrete type.
 //
+void UnionType::refineAbstractType(const DerivedType *OldType,
+                                    const Type *NewType) {
+  LLVMContextImpl *pImpl = OldType->getContext().pImpl;
+  pImpl->UnionTypes.RefineAbstractType(this, OldType, NewType);
+}
+
+void UnionType::typeBecameConcrete(const DerivedType *AbsTy) {
+  LLVMContextImpl *pImpl = AbsTy->getContext().pImpl;
+  pImpl->UnionTypes.TypeBecameConcrete(this, AbsTy);
+}
+
+// refineAbstractType - Called when a contained type is found to be more
+// concrete - this could potentially change us from an abstract type to a
+// concrete type.
+//
 void PointerType::refineAbstractType(const DerivedType *OldType,
                                      const Type *NewType) {
   LLVMContextImpl *pImpl = OldType->getContext().pImpl;
diff --git a/libclamav/c++/llvm/lib/VMCore/TypesContext.h b/libclamav/c++/llvm/lib/VMCore/TypesContext.h
index 93a801b..02ab113 100644
--- a/libclamav/c++/llvm/lib/VMCore/TypesContext.h
+++ b/libclamav/c++/llvm/lib/VMCore/TypesContext.h
@@ -68,7 +68,7 @@ static unsigned getSubElementHash(const Type *Ty) {
 class IntegerValType {
   uint32_t bits;
 public:
-  IntegerValType(uint16_t numbits) : bits(numbits) {}
+  IntegerValType(uint32_t numbits) : bits(numbits) {}
 
   static IntegerValType get(const IntegerType *Ty) {
     return IntegerValType(Ty->getBitWidth());
@@ -180,6 +180,32 @@ public:
   }
 };
 
+// UnionValType - Define a class to hold the key that goes into the TypeMap
+//
+class UnionValType {
+  std::vector<const Type*> ElTypes;
+public:
+  UnionValType(const Type* const* Types, unsigned NumTypes)
+    : ElTypes(&Types[0], &Types[NumTypes]) {}
+
+  static UnionValType get(const UnionType *UT) {
+    std::vector<const Type *> ElTypes;
+    ElTypes.reserve(UT->getNumElements());
+    for (unsigned i = 0, e = UT->getNumElements(); i != e; ++i)
+      ElTypes.push_back(UT->getElementType(i));
+
+    return UnionValType(&ElTypes[0], ElTypes.size());
+  }
+
+  static unsigned hashTypeStructure(const UnionType *UT) {
+    return UT->getNumElements();
+  }
+
+  inline bool operator<(const UnionValType &UTV) const {
+    return (ElTypes < UTV.ElTypes);
+  }
+};
+
 // FunctionValType - Define a class to hold the key that goes into the TypeMap
 //
 class FunctionValType {
@@ -216,7 +242,6 @@ protected:
   ///
   std::multimap<unsigned, PATypeHolder> TypesByHash;
 
-public:
   ~TypeMapBase() {
     // PATypeHolder won't destroy non-abstract types.
     // We can't destroy them by simply iterating, because
@@ -236,6 +261,7 @@ public:
     }
   }
 
+public:
   void RemoveFromTypesByHash(unsigned Hash, const Type *Ty) {
     std::multimap<unsigned, PATypeHolder>::iterator I =
       TypesByHash.lower_bound(Hash);
@@ -281,7 +307,6 @@ class TypeMap : public TypeMapBase {
   std::map<ValType, PATypeHolder> Map;
 public:
   typedef typename std::map<ValType, PATypeHolder>::iterator iterator;
-  ~TypeMap() { print("ON EXIT"); }
 
   inline TypeClass *get(const ValType &V) {
     iterator I = Map.find(V);
diff --git a/libclamav/c++/llvm/lib/VMCore/Verifier.cpp b/libclamav/c++/llvm/lib/VMCore/Verifier.cpp
index 76d9d43..d0e8d30 100644
--- a/libclamav/c++/llvm/lib/VMCore/Verifier.cpp
+++ b/libclamav/c++/llvm/lib/VMCore/Verifier.cpp
@@ -47,7 +47,6 @@
 #include "llvm/IntrinsicInst.h"
 #include "llvm/Metadata.h"
 #include "llvm/Module.h"
-#include "llvm/ModuleProvider.h"
 #include "llvm/Pass.h"
 #include "llvm/PassManager.h"
 #include "llvm/TypeSymbolTable.h"
@@ -413,10 +412,10 @@ void Verifier::visit(Instruction &I) {
 
 void Verifier::visitGlobalValue(GlobalValue &GV) {
   Assert1(!GV.isDeclaration() ||
+          GV.isMaterializable() ||
           GV.hasExternalLinkage() ||
           GV.hasDLLImportLinkage() ||
           GV.hasExternalWeakLinkage() ||
-          GV.hasGhostLinkage() ||
           (isa<GlobalAlias>(GV) &&
            (GV.hasLocalLinkage() || GV.hasWeakLinkage())),
   "Global is external, but doesn't have external or dllimport or weak linkage!",
@@ -648,9 +647,11 @@ void Verifier::visitFunction(Function &F) {
               "Function takes metadata but isn't an intrinsic", I, &F);
   }
 
-  if (F.isDeclaration()) {
+  if (F.isMaterializable()) {
+    // Function has a body somewhere we can't see.
+  } else if (F.isDeclaration()) {
     Assert1(F.hasExternalLinkage() || F.hasDLLImportLinkage() ||
-            F.hasExternalWeakLinkage() || F.hasGhostLinkage(),
+            F.hasExternalWeakLinkage(),
             "invalid linkage type for function declaration", &F);
   } else {
     // Verify that this function (which has a body) is not named "llvm.*".  It
@@ -1913,12 +1914,10 @@ bool llvm::verifyFunction(const Function &f, VerifierFailureAction action) {
   Function &F = const_cast<Function&>(f);
   assert(!F.isDeclaration() && "Cannot verify external functions");
 
-  ExistingModuleProvider MP(F.getParent());
-  FunctionPassManager FPM(&MP);
+  FunctionPassManager FPM(F.getParent());
   Verifier *V = new Verifier(action);
   FPM.add(V);
   FPM.run(F);
-  MP.releaseModule();
   return V->Broken;
 }
 
diff --git a/libclamav/c++/llvm/test/Analysis/LoopDependenceAnalysis/alias.ll b/libclamav/c++/llvm/test/Analysis/LoopDependenceAnalysis/alias.ll
index a5f504b..97be3fd 100644
--- a/libclamav/c++/llvm/test/Analysis/LoopDependenceAnalysis/alias.ll
+++ b/libclamav/c++/llvm/test/Analysis/LoopDependenceAnalysis/alias.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -disable-output -analyze -lda | FileCheck %s
+; RUN: opt < %s -analyze -lda | FileCheck %s
 
 ;; x[5] = x[6] // with x being a pointer passed as argument
 
diff --git a/libclamav/c++/llvm/test/Analysis/LoopDependenceAnalysis/siv-strong.ll b/libclamav/c++/llvm/test/Analysis/LoopDependenceAnalysis/siv-strong.ll
index 3270895..36ac153 100644
--- a/libclamav/c++/llvm/test/Analysis/LoopDependenceAnalysis/siv-strong.ll
+++ b/libclamav/c++/llvm/test/Analysis/LoopDependenceAnalysis/siv-strong.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -disable-output -analyze -lda | FileCheck %s
+; RUN: opt < %s -analyze -lda | FileCheck %s
 
 @x = common global [256 x i32] zeroinitializer, align 4
 @y = common global [256 x i32] zeroinitializer, align 4
diff --git a/libclamav/c++/llvm/test/Analysis/LoopDependenceAnalysis/siv-weak-crossing.ll b/libclamav/c++/llvm/test/Analysis/LoopDependenceAnalysis/siv-weak-crossing.ll
index 3d9f258..a7f9bda 100644
--- a/libclamav/c++/llvm/test/Analysis/LoopDependenceAnalysis/siv-weak-crossing.ll
+++ b/libclamav/c++/llvm/test/Analysis/LoopDependenceAnalysis/siv-weak-crossing.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -disable-output -analyze -lda | FileCheck %s
+; RUN: opt < %s -analyze -lda | FileCheck %s
 
 @x = common global [256 x i32] zeroinitializer, align 4
 @y = common global [256 x i32] zeroinitializer, align 4
diff --git a/libclamav/c++/llvm/test/Analysis/LoopDependenceAnalysis/siv-weak-zero.ll b/libclamav/c++/llvm/test/Analysis/LoopDependenceAnalysis/siv-weak-zero.ll
index 4433138..e75aefd 100644
--- a/libclamav/c++/llvm/test/Analysis/LoopDependenceAnalysis/siv-weak-zero.ll
+++ b/libclamav/c++/llvm/test/Analysis/LoopDependenceAnalysis/siv-weak-zero.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -disable-output -analyze -lda | FileCheck %s
+; RUN: opt < %s -analyze -lda | FileCheck %s
 
 @x = common global [256 x i32] zeroinitializer, align 4
 @y = common global [256 x i32] zeroinitializer, align 4
diff --git a/libclamav/c++/llvm/test/Analysis/LoopDependenceAnalysis/ziv.ll b/libclamav/c++/llvm/test/Analysis/LoopDependenceAnalysis/ziv.ll
index 0a93762..ba45948 100644
--- a/libclamav/c++/llvm/test/Analysis/LoopDependenceAnalysis/ziv.ll
+++ b/libclamav/c++/llvm/test/Analysis/LoopDependenceAnalysis/ziv.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -disable-output -analyze -lda | FileCheck %s
+; RUN: opt < %s -analyze -lda | FileCheck %s
 
 @x = common global [256 x i32] zeroinitializer, align 4
 
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2007-07-15-NegativeStride.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2007-07-15-NegativeStride.ll
index ba57662..7ff130f 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2007-07-15-NegativeStride.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2007-07-15-NegativeStride.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output \
+; RUN: opt < %s -analyze -scalar-evolution \
 ; RUN:   -scalar-evolution-max-iterations=0 | grep {Loop %bb: backedge-taken count is 100}
 ; PR1533
 
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2007-08-06-Unsigned.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2007-08-06-Unsigned.ll
index ce8f725..ab96243 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2007-08-06-Unsigned.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2007-08-06-Unsigned.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -scalar-evolution -analyze -disable-output | grep {Loop %bb: backedge-taken count is (-1 + (-1 \\* %x) + %y)}
+; RUN: opt < %s -scalar-evolution -analyze | grep {Loop %bb: backedge-taken count is (-1 + (-1 \\* %x) + %y)}
 ; PR1597
 
 define i32 @f(i32 %x, i32 %y) {
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2007-09-27-LargeStepping.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2007-09-27-LargeStepping.ll
index 817090f..b678fee 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2007-09-27-LargeStepping.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2007-09-27-LargeStepping.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output \
+; RUN: opt < %s -analyze -scalar-evolution \
 ; RUN:   -scalar-evolution-max-iterations=0 | grep {backedge-taken count is 13}
 ; PR1706
 
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2007-11-18-OrInstruction.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2007-11-18-OrInstruction.ll
index 27fe714..c12721d 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2007-11-18-OrInstruction.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2007-11-18-OrInstruction.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output | FileCheck %s
+; RUN: opt < %s -analyze -scalar-evolution | FileCheck %s
 ; PR1810
 
 define void @fun() {
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-02-11-ReversedCondition.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-02-11-ReversedCondition.ll
index 6685778..fe3a7f4 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-02-11-ReversedCondition.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-02-11-ReversedCondition.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -scalar-evolution -analyze -disable-output | grep {Loop %header: backedge-taken count is (0 smax %n)}
+; RUN: opt < %s -scalar-evolution -analyze | grep {Loop %header: backedge-taken count is (0 smax %n)}
 
 define void @foo(i32 %n) {
 entry:
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-02-12-SMAXTripCount.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-02-12-SMAXTripCount.ll
index addf346..4f14a0d 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-02-12-SMAXTripCount.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-02-12-SMAXTripCount.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -scalar-evolution -analyze -disable-output | grep {Loop %loop: backedge-taken count is (100 + (-100 smax %n))}
+; RUN: opt < %s -scalar-evolution -analyze | grep {Loop %loop: backedge-taken count is (100 + (-100 smax %n))}
 ; PR2002
 
 define void @foo(i8 %n) {
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-02-15-UMax.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-02-15-UMax.ll
index bf9f4a9..52c7985 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-02-15-UMax.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-02-15-UMax.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output | grep umax
+; RUN: opt < %s -analyze -scalar-evolution | grep umax
 ; PR2003
 
 define i32 @foo(i32 %n) {
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-05-25-NegativeStepToZero.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-05-25-NegativeStepToZero.ll
index 8d15b77..bcc124d 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-05-25-NegativeStepToZero.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-05-25-NegativeStepToZero.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output \
+; RUN: opt < %s -analyze -scalar-evolution \
 ; RUN:   -scalar-evolution-max-iterations=0 | grep {backedge-taken count is 61}
 ; PR2364
 
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-12-UnneededSelect1.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-12-UnneededSelect1.ll
index 850b670..9db9b71 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-12-UnneededSelect1.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-12-UnneededSelect1.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output |& not grep smax
+; RUN: opt < %s -analyze -scalar-evolution |& not grep smax
 ; PR2261
 
 @lut = common global [256 x i8] zeroinitializer, align 32		; <[256 x i8]*> [#uses=1]
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-12-UnneededSelect2.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-12-UnneededSelect2.ll
index 59e9fda..1847665 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-12-UnneededSelect2.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-12-UnneededSelect2.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output |& not grep smax
+; RUN: opt < %s -analyze -scalar-evolution |& not grep smax
 ; PR2070
 
 define i32 @a(i32 %x) nounwind  {
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-19-InfiniteLoop.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-19-InfiniteLoop.ll
index 989ac51..1865c05 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-19-InfiniteLoop.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-19-InfiniteLoop.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output \
+; RUN: opt < %s -analyze -scalar-evolution \
 ; RUN:   -scalar-evolution-max-iterations=0 | grep Unpredictable
 ; PR2088
 
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-19-WrappingIV.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-19-WrappingIV.ll
index 803c7d1..86e07ec 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-19-WrappingIV.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-19-WrappingIV.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output \
+; RUN: opt < %s -analyze -scalar-evolution \
 ; RUN:   -scalar-evolution-max-iterations=0 | grep {backedge-taken count is 113}
 ; PR2088
 
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-29-SGTTripCount.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-29-SGTTripCount.ll
index 37b5b94..75bd634 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-29-SGTTripCount.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-29-SGTTripCount.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output \
+; RUN: opt < %s -analyze -scalar-evolution \
 ; RUN:   -scalar-evolution-max-iterations=0 | FileCheck %s
 ; PR2607
 
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-29-SMinExpr.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-29-SMinExpr.ll
index d54b3b4..1626c1f 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-29-SMinExpr.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-07-29-SMinExpr.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output \
+; RUN: opt < %s -analyze -scalar-evolution \
 ; RUN:   -scalar-evolution-max-iterations=0 | FileCheck %s
 ; PR2607
 
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-08-04-IVOverflow.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-08-04-IVOverflow.ll
index 06200ae..3b31d79 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-08-04-IVOverflow.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-08-04-IVOverflow.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output \
+; RUN: opt < %s -analyze -scalar-evolution \
 ; RUN:   -scalar-evolution-max-iterations=0 | FileCheck %s
 ; PR2621
 
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-08-04-LongAddRec.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-08-04-LongAddRec.ll
index f3c703a..b296a19 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-08-04-LongAddRec.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-08-04-LongAddRec.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output \
+; RUN: opt < %s -analyze -scalar-evolution \
 ; RUN:   -scalar-evolution-max-iterations=0 | FileCheck %s
 ; PR2621
 
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-11-02-QuadraticCrash.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-11-02-QuadraticCrash.ll
index 9daff99..7722122 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-11-02-QuadraticCrash.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-11-02-QuadraticCrash.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output
+; RUN: opt < %s -analyze -scalar-evolution
 ; PR1827
 
 declare void @use(i32)
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-11-15-CubicOOM.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-11-15-CubicOOM.ll
index 5a2c366..2e2aabc 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-11-15-CubicOOM.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-11-15-CubicOOM.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output
+; RUN: opt < %s -analyze -scalar-evolution
 ; PR2602
 
 define i32 @a() nounwind  {
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-11-18-LessThanOrEqual.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-11-18-LessThanOrEqual.ll
index f9dd40f..06637b5 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-11-18-LessThanOrEqual.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-11-18-LessThanOrEqual.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output |& \
+; RUN: opt < %s -analyze -scalar-evolution |& \
 ; RUN: grep {Loop %bb: backedge-taken count is (7 + (-1 \\* %argc))}
 ; XFAIL: *
 
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-11-18-Stride1.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-11-18-Stride1.ll
index 9ee781f..db527fe 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-11-18-Stride1.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-11-18-Stride1.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output \
+; RUN: opt < %s -analyze -scalar-evolution \
 ; RUN:  | grep {Loop %bb: Unpredictable backedge-taken count\\.}
 
 ; ScalarEvolution can't compute a trip count because it doesn't know if
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-11-18-Stride2.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-11-18-Stride2.ll
index bcbe92f..102acc6 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-11-18-Stride2.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-11-18-Stride2.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output |& grep {/u 3}
+; RUN: opt < %s -analyze -scalar-evolution |& grep {/u 3}
 ; XFAIL: *
 
 define i32 @f(i32 %x) nounwind readnone {
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-12-08-FiniteSGE.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-12-08-FiniteSGE.ll
index 2ee107a..226221b 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-12-08-FiniteSGE.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-12-08-FiniteSGE.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output | grep {backedge-taken count is 255}
+; RUN: opt < %s -analyze -scalar-evolution | grep {backedge-taken count is 255}
 ; XFAIL: *
 
 define i32 @foo(i32 %x, i32 %y, i32* %lam, i32* %alp) nounwind {
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-12-11-SMaxOverflow.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-12-11-SMaxOverflow.ll
index 0cfd84c..33a7479 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-12-11-SMaxOverflow.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-12-11-SMaxOverflow.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output | grep {0 smax}
+; RUN: opt < %s -analyze -scalar-evolution | grep {0 smax}
 ; XFAIL: *
 
 define i32 @f(i32 %c.idx.val) {
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-12-14-StrideAndSigned.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-12-14-StrideAndSigned.ll
index 4ec358c..8152e98 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-12-14-StrideAndSigned.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-12-14-StrideAndSigned.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output |& \
+; RUN: opt < %s -analyze -scalar-evolution |& \
 ; RUN: grep {(((-1 \\* %i0) + (100005 smax %i0)) /u 5)}
 ; XFAIL: *
 
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-12-15-DontUseSDiv.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-12-15-DontUseSDiv.ll
index 1fe1068..3eaa492 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-12-15-DontUseSDiv.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2008-12-15-DontUseSDiv.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output |& grep {/u 5}
+; RUN: opt < %s -analyze -scalar-evolution |& grep {/u 5}
 ; XFAIL: *
 
 define i8 @foo0(i8 %i0) nounwind {
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2009-01-02-SignedNegativeStride.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2009-01-02-SignedNegativeStride.ll
index 9d13695..cc2a2e4 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2009-01-02-SignedNegativeStride.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2009-01-02-SignedNegativeStride.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output | not grep {/u -1}
+; RUN: opt < %s -analyze -scalar-evolution | not grep {/u -1}
 ; PR3275
 
 @g_16 = external global i16		; <i16*> [#uses=3]
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2009-04-22-TruncCast.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2009-04-22-TruncCast.ll
index 78a7fd0..c2e108a 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2009-04-22-TruncCast.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2009-04-22-TruncCast.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output | grep {(trunc i} | not grep ext
+; RUN: opt < %s -analyze -scalar-evolution | grep {(trunc i} | not grep ext
 
 define i16 @test1(i8 %x) {
   %A = sext i8 %x to i32
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2009-05-09-PointerEdgeCount.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2009-05-09-PointerEdgeCount.ll
index e81530e..dc7bd29 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2009-05-09-PointerEdgeCount.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/2009-05-09-PointerEdgeCount.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output | grep {count is 2}
+; RUN: opt < %s -analyze -scalar-evolution | grep {count is 2}
 ; PR3171
 target datalayout = "E-p:64:64:64-a0:0:8-f32:32:32-f64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-v64:64:64-v128:128:128"
 
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/SolveQuadraticEquation.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/SolveQuadraticEquation.ll
index fcc6fc3..9573aed 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/SolveQuadraticEquation.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/SolveQuadraticEquation.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output \
+; RUN: opt < %s -analyze -scalar-evolution \
 ; RUN:   -scalar-evolution-max-iterations=0 | grep {backedge-taken count is 100}
 ; PR1101
 
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/and-xor.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/and-xor.ll
index 90d947f..1772573 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/and-xor.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/and-xor.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -scalar-evolution -analyze -disable-output \
+; RUN: opt < %s -scalar-evolution -analyze \
 ; RUN:   | grep {\\-->  (zext} | count 2
 
 define i32 @foo(i32 %x) {
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/avoid-infinite-recursion-0.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/avoid-infinite-recursion-0.ll
index f638eb3..7eeb308 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/avoid-infinite-recursion-0.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/avoid-infinite-recursion-0.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output
+; RUN: opt < %s -analyze -scalar-evolution
 ; PR4537
 
 ; ModuleID = 'b.bc'
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/avoid-smax-0.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/avoid-smax-0.ll
index 55d3bd5..24275f9 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/avoid-smax-0.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/avoid-smax-0.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -scalar-evolution -analyze -disable-output | grep {Loop %bb3: backedge-taken count is (-1 + %n)}
+; RUN: opt < %s -scalar-evolution -analyze | grep {Loop %bb3: backedge-taken count is (-1 + %n)}
 
 ; We don't want to use a max in the trip count expression in
 ; this testcase.
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/div-overflow.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/div-overflow.ll
index 0c01044..4f6f1e2 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/div-overflow.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/div-overflow.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -scalar-evolution -analyze -disable-output \
+; RUN: opt < %s -scalar-evolution -analyze \
 ; RUN:  | grep {\\-->  ((-128 \\* %a) /u -128)}
 
 ; Don't let ScalarEvolution fold this div away.
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/do-loop.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/do-loop.ll
index f8d7da7..6e3295a 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/do-loop.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/do-loop.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output | grep smax
+; RUN: opt < %s -analyze -scalar-evolution | grep smax
 ; PR1614
 
 define i32 @f(i32 %x, i32 %y) {
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/max-trip-count.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/max-trip-count.ll
index a4fdcd0..a8966be 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/max-trip-count.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/max-trip-count.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output \
+; RUN: opt < %s -analyze -scalar-evolution \
 ; RUN:   | grep {\{%d,+,\[^\{\}\]\*\}<%bb>}
 
 ; ScalarEvolution should be able to understand the loop and eliminate the casts.
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/nsw-offset.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/nsw-offset.ll
index ed97de6..4cd9a6d 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/nsw-offset.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/nsw-offset.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -S -analyze -scalar-evolution -disable-output | FileCheck %s
+; RUN: opt < %s -S -analyze -scalar-evolution | FileCheck %s
 
 ; ScalarEvolution should be able to fold away the sign-extensions
 ; on this loop with a primary induction variable incremented with
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/nsw.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/nsw.ll
index e4f2b29..456f3f0 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/nsw.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/nsw.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output | grep { -->  {.*,+,.*}<%bb>} | count 8
+; RUN: opt < %s -analyze -scalar-evolution | grep { -->  {.*,+,.*}<%bb>} | count 8
 
 ; The addrecs in this loop are analyzable only by using nsw information.
 
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/pointer-sign-bits.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/pointer-sign-bits.ll
index 4de006c..b2cec2d 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/pointer-sign-bits.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/pointer-sign-bits.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output
+; RUN: opt < %s -analyze -scalar-evolution
 
 target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f80:32:32"
   %JavaObject = type { [0 x i32 (...)*]*, i8* }
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/sext-inreg.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/sext-inreg.ll
index 4487822..23e1210 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/sext-inreg.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/sext-inreg.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output > %t
+; RUN: opt < %s -analyze -scalar-evolution > %t
 ; RUN: grep {sext i57 \{0,+,199\}<%bb> to i64} %t | count 1
 ; RUN: grep {sext i59 \{0,+,199\}<%bb> to i64} %t | count 1
 
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/sext-iv-0.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/sext-iv-0.ll
index 05983c1..2af794f 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/sext-iv-0.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/sext-iv-0.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -disable-output -scalar-evolution -analyze \
+; RUN: opt < %s -scalar-evolution -analyze \
 ; RUN:  | grep { -->  \{-128,+,1\}<%bb1>		Exits: 127} | count 5
 
 ; Convert (sext {-128,+,1}) to {sext(-128),+,sext(1)}, since the
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/sext-iv-1.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/sext-iv-1.ll
index 0bf51d9..9063cbb 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/sext-iv-1.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/sext-iv-1.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -disable-output -scalar-evolution -analyze \
+; RUN: opt < %s -scalar-evolution -analyze \
 ; RUN:  | grep { -->  (sext i. \{.\*,+,.\*\}<%bb1> to i64)} | count 5
 
 ; Don't convert (sext {...,+,...}) to {sext(...),+,sext(...)} in cases
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/sext-iv-2.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/sext-iv-2.ll
index fc39cae..97e252c 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/sext-iv-2.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/sext-iv-2.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output | FileCheck %s
+; RUN: opt < %s -analyze -scalar-evolution | FileCheck %s
 
 ; CHECK: %tmp3 = sext i8 %tmp2 to i32
 ; CHECK: -->  (sext i8 {0,+,1}<%bb1> to i32)   Exits: -1
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/smax.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/smax.ll
index 39de8d6..15dd744 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/smax.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/smax.ll
@@ -1,5 +1,5 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output | grep smax | count 2
-; RUN: opt < %s -analyze -scalar-evolution -disable-output | grep \
+; RUN: opt < %s -analyze -scalar-evolution | grep smax | count 2
+; RUN: opt < %s -analyze -scalar-evolution | grep \
 ; RUN:     {%. smax %. smax %.}
 ; PR1614
 
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count.ll
index 66cc304..d750d4a 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output \
+; RUN: opt < %s -analyze -scalar-evolution \
 ; RUN:   -scalar-evolution-max-iterations=0 | grep {backedge-taken count is 10000}
 ; PR1101
 
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count2.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count2.ll
index bbe6435..79f3161 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count2.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count2.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output | \
+; RUN: opt < %s -analyze -scalar-evolution | \
 ; RUN:   grep {backedge-taken count is 4}
 ; PR1101
 
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count3.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count3.ll
index 7d8e0c6..10b798b 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count3.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count3.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -scalar-evolution -analyze -disable-output \
+; RUN: opt < %s -scalar-evolution -analyze \
 ; RUN:  | grep {Loop %bb3\\.i: Unpredictable backedge-taken count\\.}
 
 ; ScalarEvolution can't compute a trip count because it doesn't know if
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count4.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count4.ll
index e8d59cf..116f62d 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count4.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count4.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output \
+; RUN: opt < %s -analyze -scalar-evolution \
 ; RUN:   | grep {sext.*trunc.*Exits: 11}
 
 ; ScalarEvolution should be able to compute a loop exit value for %indvar.i8.
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count5.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count5.ll
index 2512a96..1194a1d 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count5.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count5.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output > %t
+; RUN: opt < %s -analyze -scalar-evolution > %t
 ; RUN: grep sext %t | count 2
 ; RUN: not grep {(sext} %t
 
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count6.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count6.ll
index 5833286..956fb81 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count6.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count6.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -disable-output -scalar-evolution \
+; RUN: opt < %s -analyze -scalar-evolution \
 ; RUN:  | grep {max backedge-taken count is 1\$}
 
 @mode_table = global [4 x i32] zeroinitializer          ; <[4 x i32]*> [#uses=1]
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count7.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count7.ll
index 74c856f..a8b797e 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count7.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count7.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output \
+; RUN: opt < %s -analyze -scalar-evolution \
 ; RUN:   | grep {Loop %bb7.i: Unpredictable backedge-taken count\\.}
 
 target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128"
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count8.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count8.ll
index 5063342..ac5ee60 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count8.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/trip-count8.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output \
+; RUN: opt < %s -analyze -scalar-evolution \
 ; RUN:  | grep {Loop %for\\.body: backedge-taken count is (-1 + \[%\]ecx)}
 ; PR4599
 
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/xor-and.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/xor-and.ll
index c8339d7..c0530bb 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/xor-and.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/xor-and.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -scalar-evolution -disable-output -analyze \
+; RUN: opt < %s -scalar-evolution -analyze \
 ; RUN:   | grep {\\-->  (zext i4 (-8 + (trunc i64 (8 \\* %x) to i4)) to i64)}
 
 ; ScalarEvolution shouldn't try to analyze %z into something like
diff --git a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/zext-wrap.ll b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/zext-wrap.ll
index c4ac5de..38d15ff 100644
--- a/libclamav/c++/llvm/test/Analysis/ScalarEvolution/zext-wrap.ll
+++ b/libclamav/c++/llvm/test/Analysis/ScalarEvolution/zext-wrap.ll
@@ -1,4 +1,4 @@
-; RUN: opt < %s -analyze -scalar-evolution -disable-output \
+; RUN: opt < %s -analyze -scalar-evolution \
 ; RUN:  | FileCheck %s
 ; PR4569
 
diff --git a/libclamav/c++/llvm/test/Assembler/2010-01-06-UnionType.ll b/libclamav/c++/llvm/test/Assembler/2010-01-06-UnionType.ll
new file mode 100644
index 0000000..37130d6
--- /dev/null
+++ b/libclamav/c++/llvm/test/Assembler/2010-01-06-UnionType.ll
@@ -0,0 +1,3 @@
+; RUN: llvm-as %s -o /dev/null
+
+%X = type union { i32, i32* }
diff --git a/libclamav/c++/llvm/test/Assembler/2010-02-05-FunctionLocalMetadataBecomesNull.ll b/libclamav/c++/llvm/test/Assembler/2010-02-05-FunctionLocalMetadataBecomesNull.ll
new file mode 100644
index 0000000..b2256b1
--- /dev/null
+++ b/libclamav/c++/llvm/test/Assembler/2010-02-05-FunctionLocalMetadataBecomesNull.ll
@@ -0,0 +1,25 @@
+; RUN: opt -std-compile-opts < %s | llvm-dis | not grep badref 
+
+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64"
+target triple = "x86_64-apple-darwin10.2"
+
+%struct.anon = type { i32, i32 }
+%struct.test = type { i64, %struct.anon, %struct.test* }
+
+ at TestArrayPtr = global %struct.test* getelementptr inbounds ([10 x %struct.test]* @TestArray, i64 0, i64 3) ; <%struct.test**> [#uses=1]
+ at TestArray = common global [10 x %struct.test] zeroinitializer, align 32 ; <[10 x %struct.test]*> [#uses=2]
+
+define i32 @main() nounwind readonly {
+  %diff1 = alloca i64                             ; <i64*> [#uses=2]
+  call void @llvm.dbg.declare(metadata !{i64* %diff1}, metadata !0)
+  store i64 72, i64* %diff1, align 8
+  %v1 = load %struct.test** @TestArrayPtr, align 8 ; <%struct.test*> [#uses=1]
+  %v2 = ptrtoint %struct.test* %v1 to i64 ; <i64> [#uses=1]
+  %v3 = sub i64 %v2, ptrtoint ([10 x %struct.test]* @TestArray to i64) ; <i64> [#uses=1]
+  store i64 %v3, i64* %diff1, align 8
+  ret i32 4
+}
+
+declare void @llvm.dbg.declare(metadata, metadata) nounwind readnone
+
+!0 = metadata !{i32 459008, metadata !0, metadata !0, metadata !0, i32 38, metadata !0} ; [ DW_TAG_auto_variable ]
diff --git a/libclamav/c++/llvm/test/Assembler/functionlocal-metadata.ll b/libclamav/c++/llvm/test/Assembler/functionlocal-metadata.ll
index 16bc9d0..216587d 100644
--- a/libclamav/c++/llvm/test/Assembler/functionlocal-metadata.ll
+++ b/libclamav/c++/llvm/test/Assembler/functionlocal-metadata.ll
@@ -2,6 +2,8 @@
 
 define void @Foo(i32 %a, i32 %b) {
 entry:
+  call void @llvm.dbg.value(metadata !{ i32* %1 }, i64 16, metadata !"bar")
+; CHECK: call void @llvm.dbg.value(metadata !{i32* %1}, i64 16, metadata !"bar")
   %0 = add i32 %a, 1                              ; <i32> [#uses=1]
   %two = add i32 %b, %0                           ; <i32> [#uses=0]
   %1 = alloca i32                                 ; <i32*> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/2009-10-30.ll b/libclamav/c++/llvm/test/CodeGen/ARM/2009-10-30.ll
index 8256386..90a5bd2 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/2009-10-30.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/2009-10-30.ll
@@ -5,8 +5,8 @@
 define void @f(i32 %a1, i32 %a2, i32 %a3, i32 %a4, i32 %a5, ...) {
 entry:
 ;CHECK: sub	sp, sp, #4
-;CHECK: add	r0, sp, #8
-;CHECK: str	r0, [sp], #+4
+;CHECK: add	r{{[0-9]+}}, sp, #8
+;CHECK: str	r{{[0-9]+}}, [sp], #+4
 ;CHECK: bx	lr
 	%ap = alloca i8*, align 4
 	%ap1 = bitcast i8** %ap to i8*
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/aliases.ll b/libclamav/c++/llvm/test/CodeGen/ARM/aliases.ll
index b2c0314..31c5007 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/aliases.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/aliases.ll
@@ -1,5 +1,5 @@
 ; RUN: llc < %s -mtriple=arm-linux-gnueabi -o %t
-; RUN: grep set %t   | count 5
+; RUN: grep { = } %t   | count 5
 ; RUN: grep globl %t | count 4
 ; RUN: grep weak %t  | count 1
 
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/align.ll b/libclamav/c++/llvm/test/CodeGen/ARM/align.ll
index 492d7af..d4d0128 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/align.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/align.ll
@@ -8,31 +8,31 @@
 ; no alignment
 
 @c = global i16 2
-;ELF: .align 2
+;ELF: .align 1
 ;ELF: c:
 ;DARWIN: .align 1
 ;DARWIN: _c:
 
 @d = global i32 3
-;ELF: .align 4
+;ELF: .align 2
 ;ELF: d:
 ;DARWIN: .align 2
 ;DARWIN: _d:
 
 @e = global i64 4
-;ELF: .align 8
+;ELF: .align 3
 ;ELF: e
 ;DARWIN: .align 2
 ;DARWIN: _e:
 
 @f = global float 5.0
-;ELF: .align 4
+;ELF: .align 2
 ;ELF: f:
 ;DARWIN: .align 2
 ;DARWIN: _f:
 
 @g = global double 6.0
-;ELF: .align 8
+;ELF: .align 3
 ;ELF: g:
 ;DARWIN: .align 2
 ;DARWIN: _g:
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/arm-negative-stride.ll b/libclamav/c++/llvm/test/CodeGen/ARM/arm-negative-stride.ll
index 72ec8ef..52ab871 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/arm-negative-stride.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/arm-negative-stride.ll
@@ -1,7 +1,32 @@
 ; RUN: llc < %s -march=arm | FileCheck %s
 
+; This loop is rewritten with an indvar which counts down, which
+; frees up a register from holding the trip count.
+
 define void @test(i32* %P, i32 %A, i32 %i) nounwind {
 entry:
+; CHECK: str r1, [{{r.*}}, +{{r.*}}, lsl #2]
+        icmp eq i32 %i, 0               ; <i1>:0 [#uses=1]
+        br i1 %0, label %return, label %bb
+
+bb:             ; preds = %bb, %entry
+        %indvar = phi i32 [ 0, %entry ], [ %indvar.next, %bb ]          ; <i32> [#uses=2]
+        %i_addr.09.0 = sub i32 %i, %indvar              ; <i32> [#uses=1]
+        %tmp2 = getelementptr i32* %P, i32 %i_addr.09.0         ; <i32*> [#uses=1]
+        store i32 %A, i32* %tmp2
+        %indvar.next = add i32 %indvar, 1               ; <i32> [#uses=2]
+        icmp eq i32 %indvar.next, %i            ; <i1>:1 [#uses=1]
+        br i1 %1, label %return, label %bb
+
+return:         ; preds = %bb, %entry
+        ret void
+}
+
+; This loop has a non-address use of the count-up indvar, so
+; it'll remain. Now the original store uses a negative-stride address.
+
+define void @test_with_forced_iv(i32* %P, i32 %A, i32 %i) nounwind {
+entry:
 ; CHECK: str r1, [{{r.*}}, -{{r.*}}, lsl #2]
         icmp eq i32 %i, 0               ; <i1>:0 [#uses=1]
         br i1 %0, label %return, label %bb
@@ -11,6 +36,7 @@ bb:             ; preds = %bb, %entry
         %i_addr.09.0 = sub i32 %i, %indvar              ; <i32> [#uses=1]
         %tmp2 = getelementptr i32* %P, i32 %i_addr.09.0         ; <i32*> [#uses=1]
         store i32 %A, i32* %tmp2
+        store i32 %indvar, i32* null
         %indvar.next = add i32 %indvar, 1               ; <i32> [#uses=2]
         icmp eq i32 %indvar.next, %i            ; <i1>:1 [#uses=1]
         br i1 %1, label %return, label %bb
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/globals.ll b/libclamav/c++/llvm/test/CodeGen/ARM/globals.ll
index 83849f4..886c0d5 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/globals.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/globals.ll
@@ -67,9 +67,9 @@ define i32 @test1() {
 ; LinuxPIC: 	ldr r0, [r0]
 ; LinuxPIC: 	bx lr
 
-; LinuxPIC: .align 4
+; LinuxPIC: .align 2
 ; LinuxPIC: .LCPI1_0:
 ; LinuxPIC:     .long _GLOBAL_OFFSET_TABLE_-(.LPC1_0+8)
-; LinuxPIC: .align 4
+; LinuxPIC: .align 2
 ; LinuxPIC: .LCPI1_1:
 ; LinuxPIC:     .long	G(GOT)
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/iabs.ll b/libclamav/c++/llvm/test/CodeGen/ARM/iabs.ll
index 1054f27..63808b2 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/iabs.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/iabs.ll
@@ -1,5 +1,4 @@
-; RUN: llc < %s -march=arm -stats |& \
-; RUN:   grep {3 .*Number of machine instrs printed}
+; RUN: llc < %s -march=arm | FileCheck %s
 
 ;; Integer absolute value, should produce something as good as: ARM:
 ;;   add r3, r0, r0, asr #31
@@ -11,5 +10,7 @@ define i32 @test(i32 %a) {
         %b = icmp sgt i32 %a, -1
         %abs = select i1 %b, i32 %a, i32 %tmp1neg
         ret i32 %abs
+; CHECK:   add r1, r0, r0, asr #31
+; CHECK:   eor r0, r1, r0, asr #31
+; CHECK:  bx lr
 }
-
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/long_shift.ll b/libclamav/c++/llvm/test/CodeGen/ARM/long_shift.ll
index 688b7bc..76332cc 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/long_shift.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/long_shift.ll
@@ -23,10 +23,10 @@ define i32 @f1(i64 %x, i64 %y) {
 define i32 @f2(i64 %x, i64 %y) {
 ; CHECK: f2
 ; CHECK:      mov     r0, r0, lsr r2
-; CHECK-NEXT: rsb     r3, r2, #32
+; CHECK-NEXT: rsb     r12, r2, #32
 ; CHECK-NEXT: sub     r2, r2, #32
 ; CHECK-NEXT: cmp     r2, #0
-; CHECK-NEXT: orr     r0, r0, r1, lsl r3
+; CHECK-NEXT: orr     r0, r0, r1, lsl r12
 ; CHECK-NEXT: movge   r0, r1, asr r2
 	%a = ashr i64 %x, %y
 	%b = trunc i64 %a to i32
@@ -36,10 +36,10 @@ define i32 @f2(i64 %x, i64 %y) {
 define i32 @f3(i64 %x, i64 %y) {
 ; CHECK: f3
 ; CHECK:      mov     r0, r0, lsr r2
-; CHECK-NEXT: rsb     r3, r2, #32
+; CHECK-NEXT: rsb     r12, r2, #32
 ; CHECK-NEXT: sub     r2, r2, #32
 ; CHECK-NEXT: cmp     r2, #0
-; CHECK-NEXT: orr     r0, r0, r1, lsl r3
+; CHECK-NEXT: orr     r0, r0, r1, lsl r12
 ; CHECK-NEXT: movge   r0, r1, lsr r2
 	%a = lshr i64 %x, %y
 	%b = trunc i64 %a to i32
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/lsr-code-insertion.ll b/libclamav/c++/llvm/test/CodeGen/ARM/lsr-code-insertion.ll
index 507ec2c..1bbb96d 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/lsr-code-insertion.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/lsr-code-insertion.ll
@@ -1,5 +1,5 @@
-; RUN: llc < %s -stats |& grep {40.*Number of machine instrs printed}
-; RUN: llc < %s -stats |& grep {.*Number of re-materialization}
+; RUN: llc < %s -stats |& grep {39.*Number of machine instrs printed}
+; RUN: llc < %s -stats |& not grep {.*Number of re-materialization}
 ; This test really wants to check that the resultant "cond_true" block only 
 ; has a single store in it, and that cond_true55 only has code to materialize 
 ; the constant and do a store.  We do *not* want something like this:
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/remat-2.ll b/libclamav/c++/llvm/test/CodeGen/ARM/remat-2.ll
deleted file mode 100644
index 1a871d2..0000000
--- a/libclamav/c++/llvm/test/CodeGen/ARM/remat-2.ll
+++ /dev/null
@@ -1,65 +0,0 @@
-; RUN: llc < %s -march=arm -mattr=+v6,+vfp2 -stats -info-output-file - | grep "Number of re-materialization"
-
-define arm_apcscc i32 @main(i32 %argc, i8** nocapture %argv) nounwind {
-entry:
-  br i1 undef, label %smvp.exit, label %bb.i3
-
-bb.i3:                                            ; preds = %bb.i3, %bb134
-  br i1 undef, label %smvp.exit, label %bb.i3
-
-smvp.exit:                                        ; preds = %bb.i3
-  %0 = fmul double undef, 2.400000e-03            ; <double> [#uses=2]
-  br i1 undef, label %bb138.preheader, label %bb159
-
-bb138.preheader:                                  ; preds = %smvp.exit
-  br label %bb138
-
-bb138:                                            ; preds = %bb138, %bb138.preheader
-  br i1 undef, label %bb138, label %bb145.loopexit
-
-bb142:                                            ; preds = %bb.nph218.bb.nph218.split_crit_edge, %phi0.exit
-  %1 = fmul double undef, -1.200000e-03           ; <double> [#uses=1]
-  %2 = fadd double undef, %1                      ; <double> [#uses=1]
-  %3 = fmul double %2, undef                      ; <double> [#uses=1]
-  %4 = fsub double 0.000000e+00, %3               ; <double> [#uses=1]
-  br i1 %14, label %phi1.exit, label %bb.i35
-
-bb.i35:                                           ; preds = %bb142
-  %5 = call arm_apcscc  double @sin(double %15) nounwind readonly ; <double> [#uses=1]
-  %6 = fmul double %5, 0x4031740AFA84AD8A         ; <double> [#uses=1]
-  %7 = fsub double 1.000000e+00, undef            ; <double> [#uses=1]
-  %8 = fdiv double %7, 6.000000e-01               ; <double> [#uses=1]
-  br label %phi1.exit
-
-phi1.exit:                                        ; preds = %bb.i35, %bb142
-  %.pn = phi double [ %6, %bb.i35 ], [ 0.000000e+00, %bb142 ] ; <double> [#uses=0]
-  %9 = phi double [ %8, %bb.i35 ], [ 0.000000e+00, %bb142 ] ; <double> [#uses=1]
-  %10 = fmul double undef, %9                     ; <double> [#uses=0]
-  br i1 %14, label %phi0.exit, label %bb.i
-
-bb.i:                                             ; preds = %phi1.exit
-  unreachable
-
-phi0.exit:                                        ; preds = %phi1.exit
-  %11 = fsub double %4, undef                     ; <double> [#uses=1]
-  %12 = fadd double 0.000000e+00, %11             ; <double> [#uses=1]
-  store double %12, double* undef, align 4
-  br label %bb142
-
-bb145.loopexit:                                   ; preds = %bb138
-  br i1 undef, label %bb.nph218.bb.nph218.split_crit_edge, label %bb159
-
-bb.nph218.bb.nph218.split_crit_edge:              ; preds = %bb145.loopexit
-  %13 = fmul double %0, 0x401921FB54442D18        ; <double> [#uses=1]
-  %14 = fcmp ugt double %0, 6.000000e-01          ; <i1> [#uses=2]
-  %15 = fdiv double %13, 6.000000e-01             ; <double> [#uses=1]
-  br label %bb142
-
-bb159:                                            ; preds = %bb145.loopexit, %smvp.exit, %bb134
-  unreachable
-
-bb166:                                            ; preds = %bb127
-  unreachable
-}
-
-declare arm_apcscc double @sin(double) nounwind readonly
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/remat.ll b/libclamav/c++/llvm/test/CodeGen/ARM/remat.ll
index 9565c8b..92c1cf1 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/remat.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/remat.ll
@@ -1,119 +1,65 @@
-; RUN: llc < %s -mtriple=arm-apple-darwin 
-; RUN: llc < %s -mtriple=arm-apple-darwin -stats -info-output-file - | grep "Number of re-materialization" | grep 3
+; RUN: llc < %s -march=arm -mattr=+v6,+vfp2 -stats -info-output-file - | grep "Number of re-materialization"
 
-	%struct.CONTENTBOX = type { i32, i32, i32, i32, i32 }
-	%struct.LOCBOX = type { i32, i32, i32, i32 }
-	%struct.SIDEBOX = type { i32, i32 }
-	%struct.UNCOMBOX = type { i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32 }
-	%struct.cellbox = type { i8*, i32, i32, i32, [9 x i32], i32, i32, i32, i32, i32, i32, i32, double, double, double, double, double, i32, i32, %struct.CONTENTBOX*, %struct.UNCOMBOX*, [8 x %struct.tilebox*], %struct.SIDEBOX* }
-	%struct.termbox = type { %struct.termbox*, i32, i32, i32, i32, i32 }
-	%struct.tilebox = type { %struct.tilebox*, double, double, double, double, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, %struct.termbox*, %struct.LOCBOX* }
- at numcells = external global i32		; <i32*> [#uses=1]
- at cellarray = external global %struct.cellbox**		; <%struct.cellbox***> [#uses=1]
- at numBinsY = external global i32		; <i32*> [#uses=1]
-
-define fastcc void @fixpenal() {
+define arm_apcscc i32 @main(i32 %argc, i8** nocapture %argv, double %d1, double %d2) nounwind {
 entry:
-	%tmp491 = load i32* @numcells, align 4		; <i32> [#uses=1]
-	%tmp9 = load %struct.cellbox*** @cellarray, align 4		; <%struct.cellbox**> [#uses=1]
-	%tmp77.i = load i32* @numBinsY, align 4		; <i32> [#uses=2]
-	br label %bb490
-
-bb8:		; preds = %bb490, %cond_false428
-  %foo3 = phi i1 [ 0, %bb490 ], [ 1, %cond_false428 ]
-	br i1 %foo3, label %cond_false58.i, label %cond_false.i
-
-cond_false.i:		; preds = %bb8
-	ret void
-
-cond_false58.i:		; preds = %bb8
-	%highBinX.0.i = select i1 false, i32 1, i32 0		; <i32> [#uses=2]
-	br i1 %foo3, label %cond_next85.i, label %cond_false76.i
-
-cond_false76.i:		; preds = %cond_false58.i
-	ret void
-
-cond_next85.i:		; preds = %cond_false58.i
-	br i1 %foo3, label %cond_next105.i, label %cond_false98.i
-
-cond_false98.i:		; preds = %cond_next85.i
-	ret void
-
-cond_next105.i:		; preds = %cond_next85.i
-	%tmp108.i = icmp eq i32 1, %highBinX.0.i		; <i1> [#uses=1]
-	%tmp115.i = icmp eq i32 1, %tmp77.i		; <i1> [#uses=1]
-	%bothcond.i = and i1 %tmp115.i, %tmp108.i		; <i1> [#uses=1]
-	%storemerge.i = select i1 %bothcond.i, i32 1, i32 0		; <i32> [#uses=2]
-	br i1 %bothcond.i, label %whoOverlaps.exit, label %bb503.preheader.i
-
-bb503.preheader.i:		; preds = %bb513.i, %cond_next105.i
-	%i.022.0.i = phi i32 [ %tmp512.i, %bb513.i ], [ 0, %cond_next105.i ]		; <i32> [#uses=2]
-	%tmp165.i = getelementptr i32*** null, i32 %i.022.0.i		; <i32***> [#uses=0]
-	br label %bb503.i
-
-bb137.i:		; preds = %bb503.i
-	br i1 %tmp506.i, label %bb162.i, label %bb148.i
-
-bb148.i:		; preds = %bb137.i
-	ret void
-
-bb162.i:		; preds = %bb137.i
-	%tmp49435.i = load i32* null		; <i32> [#uses=1]
-	br label %bb170.i
-
-bb170.i:		; preds = %bb491.i, %bb162.i
-	%indvar.i = phi i32 [ %k.032.0.i, %bb491.i ], [ 0, %bb162.i ]		; <i32> [#uses=2]
-	%k.032.0.i = add i32 %indvar.i, 1		; <i32> [#uses=2]
-	%tmp173.i = getelementptr i32* null, i32 %k.032.0.i		; <i32*> [#uses=1]
-	%tmp174.i = load i32* %tmp173.i		; <i32> [#uses=4]
-	%tmp177.i = icmp eq i32 %tmp174.i, %cell.1		; <i1> [#uses=1]
-	%tmp184.i = icmp sgt i32 %tmp174.i, %tmp491		; <i1> [#uses=1]
-	%bothcond = or i1 %tmp177.i, %tmp184.i		; <i1> [#uses=1]
-	br i1 %bothcond, label %bb491.i, label %cond_next188.i
-
-cond_next188.i:		; preds = %bb170.i
-	%tmp191.i = getelementptr %struct.cellbox** %tmp9, i32 %tmp174.i		; <%struct.cellbox**> [#uses=1]
-	%tmp192.i = load %struct.cellbox** %tmp191.i		; <%struct.cellbox*> [#uses=1]
-	%tmp195.i = icmp eq i32 %tmp174.i, 0		; <i1> [#uses=1]
-	br i1 %tmp195.i, label %bb491.i, label %cond_true198.i
-
-cond_true198.i:		; preds = %cond_next188.i
-	%tmp210.i = getelementptr %struct.cellbox* %tmp192.i, i32 0, i32 3		; <i32*> [#uses=0]
-	ret void
-
-bb491.i:		; preds = %cond_next188.i, %bb170.i
-	%tmp490.i = add i32 %indvar.i, 2		; <i32> [#uses=1]
-	%tmp496.i = icmp slt i32 %tmp49435.i, %tmp490.i		; <i1> [#uses=1]
-	br i1 %tmp496.i, label %bb500.i, label %bb170.i
-
-bb500.i:		; preds = %bb491.i
-	%indvar.next82.i = add i32 %j.0.i, 1		; <i32> [#uses=1]
-	br label %bb503.i
-
-bb503.i:		; preds = %bb500.i, %bb503.preheader.i
-	%j.0.i = phi i32 [ 0, %bb503.preheader.i ], [ %indvar.next82.i, %bb500.i ]		; <i32> [#uses=2]
-	%tmp506.i = icmp sgt i32 %j.0.i, %tmp77.i		; <i1> [#uses=1]
-	br i1 %tmp506.i, label %bb513.i, label %bb137.i
-
-bb513.i:		; preds = %bb503.i
-	%tmp512.i = add i32 %i.022.0.i, 1		; <i32> [#uses=2]
-	%tmp516.i = icmp sgt i32 %tmp512.i, %highBinX.0.i		; <i1> [#uses=1]
-	br i1 %tmp516.i, label %whoOverlaps.exit, label %bb503.preheader.i
-
-whoOverlaps.exit:		; preds = %bb513.i, %cond_next105.i
-  %foo = phi i1 [ 1, %bb513.i], [0, %cond_next105.i]
-	br i1 %foo, label %cond_false428, label %bb490
-
-cond_false428:		; preds = %whoOverlaps.exit
-	br i1 %foo, label %bb497, label %bb8
-
-bb490:		; preds = %whoOverlaps.exit, %entry
-	%binY.tmp.2 = phi i32 [ 0, %entry ], [ %storemerge.i, %whoOverlaps.exit ]		; <i32> [#uses=1]
-	%cell.1 = phi i32 [ 1, %entry ], [ 0, %whoOverlaps.exit ]		; <i32> [#uses=1]
-	%foo2 = phi i1 [ 1, %entry], [0, %whoOverlaps.exit]
-	br i1 %foo2, label %bb497, label %bb8
-
-bb497:		; preds = %bb490, %cond_false428
-	%binY.tmp.3 = phi i32 [ %binY.tmp.2, %bb490 ], [ %storemerge.i, %cond_false428 ]		; <i32> [#uses=0]
-	ret void
+  br i1 undef, label %smvp.exit, label %bb.i3
+
+bb.i3:                                            ; preds = %bb.i3, %bb134
+  br i1 undef, label %smvp.exit, label %bb.i3
+
+smvp.exit:                                        ; preds = %bb.i3
+  %0 = fmul double %d1, 2.400000e-03            ; <double> [#uses=2]
+  br i1 undef, label %bb138.preheader, label %bb159
+
+bb138.preheader:                                  ; preds = %smvp.exit
+  br label %bb138
+
+bb138:                                            ; preds = %bb138, %bb138.preheader
+  br i1 undef, label %bb138, label %bb145.loopexit
+
+bb142:                                            ; preds = %bb.nph218.bb.nph218.split_crit_edge, %phi0.exit
+  %1 = fmul double %d1, -1.200000e-03           ; <double> [#uses=1]
+  %2 = fadd double %d2, %1                      ; <double> [#uses=1]
+  %3 = fmul double %2, %d2                      ; <double> [#uses=1]
+  %4 = fsub double 0.000000e+00, %3               ; <double> [#uses=1]
+  br i1 %14, label %phi1.exit, label %bb.i35
+
+bb.i35:                                           ; preds = %bb142
+  %5 = call arm_apcscc  double @sin(double %15) nounwind readonly ; <double> [#uses=1]
+  %6 = fmul double %5, 0x4031740AFA84AD8A         ; <double> [#uses=1]
+  %7 = fsub double 1.000000e+00, undef            ; <double> [#uses=1]
+  %8 = fdiv double %7, 6.000000e-01               ; <double> [#uses=1]
+  br label %phi1.exit
+
+phi1.exit:                                        ; preds = %bb.i35, %bb142
+  %.pn = phi double [ %6, %bb.i35 ], [ 0.000000e+00, %bb142 ] ; <double> [#uses=1]
+  %9 = phi double [ %8, %bb.i35 ], [ 0.000000e+00, %bb142 ] ; <double> [#uses=1]
+  %10 = fmul double %.pn, %9                      ; <double> [#uses=1]
+  br i1 %14, label %phi0.exit, label %bb.i
+
+bb.i:                                             ; preds = %phi1.exit
+  unreachable
+
+phi0.exit:                                        ; preds = %phi1.exit
+  %11 = fsub double %4, %10                       ; <double> [#uses=1]
+  %12 = fadd double 0.000000e+00, %11             ; <double> [#uses=1]
+  store double %12, double* undef, align 4
+  br label %bb142
+
+bb145.loopexit:                                   ; preds = %bb138
+  br i1 undef, label %bb.nph218.bb.nph218.split_crit_edge, label %bb159
+
+bb.nph218.bb.nph218.split_crit_edge:              ; preds = %bb145.loopexit
+  %13 = fmul double %0, 0x401921FB54442D18        ; <double> [#uses=1]
+  %14 = fcmp ugt double %0, 6.000000e-01          ; <i1> [#uses=2]
+  %15 = fdiv double %13, 6.000000e-01             ; <double> [#uses=1]
+  br label %bb142
+
+bb159:                                            ; preds = %bb145.loopexit, %smvp.exit, %bb134
+  unreachable
+
+bb166:                                            ; preds = %bb127
+  unreachable
 }
+
+declare arm_apcscc double @sin(double) nounwind readonly
diff --git a/libclamav/c++/llvm/test/CodeGen/ARM/unaligned_load_store.ll b/libclamav/c++/llvm/test/CodeGen/ARM/unaligned_load_store.ll
index fcaa2b3..a4494f3 100644
--- a/libclamav/c++/llvm/test/CodeGen/ARM/unaligned_load_store.ll
+++ b/libclamav/c++/llvm/test/CodeGen/ARM/unaligned_load_store.ll
@@ -1,6 +1,6 @@
 ; RUN: llc < %s -march=arm | FileCheck %s -check-prefix=GENERIC
 ; RUN: llc < %s -mtriple=armv6-apple-darwin | FileCheck %s -check-prefix=DARWIN_V6
-; RUN: llc < %s -march=arm -mattr=+v7a | FileCheck %s -check-prefix=V7
+; RUN: llc < %s -mtriple=armv6-linux | FileCheck %s -check-prefix=GENERIC
 
 ; rdar://7113725
 
@@ -20,9 +20,6 @@ entry:
 ; DARWIN_V6: ldr r1
 ; DARWIN_V6: str r1
 
-; V7: t:
-; V7: ldr r1
-; V7: str r1
   %__src1.i = bitcast i8* %b to i32*              ; <i32*> [#uses=1]
   %__dest2.i = bitcast i8* %a to i32*             ; <i32*> [#uses=1]
   %tmp.i = load i32* %__src1.i, align 1           ; <i32> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/Generic/2006-04-11-vecload.ll b/libclamav/c++/llvm/test/CodeGen/Generic/2006-04-11-vecload.ll
deleted file mode 100644
index a68ed83..0000000
--- a/libclamav/c++/llvm/test/CodeGen/Generic/2006-04-11-vecload.ll
+++ /dev/null
@@ -1,12 +0,0 @@
-; RUN: llc < %s -march=x86 -mcpu=yonah
-
-; The vload was getting memoized to the previous scalar load!
-
-define void @VertexProgram2() {
-        %xFloat0.688 = load float* null         ; <float> [#uses=0]
-        %loadVector37.712 = load <4 x float>* null              ; <<4 x float>> [#uses=1]
-        %inFloat3.713 = insertelement <4 x float> %loadVector37.712, float 0.000000e+00, i32 3          ; <<4 x float>> [#uses=1]
-        store <4 x float> %inFloat3.713, <4 x float>* null
-        unreachable
-}
-
diff --git a/libclamav/c++/llvm/test/CodeGen/Generic/2006-11-06-MemIntrinsicExpand.ll b/libclamav/c++/llvm/test/CodeGen/Generic/2006-11-06-MemIntrinsicExpand.ll
deleted file mode 100644
index ad3e49f..0000000
--- a/libclamav/c++/llvm/test/CodeGen/Generic/2006-11-06-MemIntrinsicExpand.ll
+++ /dev/null
@@ -1,11 +0,0 @@
-; RUN: llc < %s -march=x86 | not grep adc
-; PR987
-
-declare void @llvm.memcpy.i64(i8*, i8*, i64, i32)
-
-define void @foo(i64 %a) {
-        %b = add i64 %a, 1              ; <i64> [#uses=1]
-        call void @llvm.memcpy.i64( i8* null, i8* null, i64 %b, i32 1 )
-        ret void
-}
-
diff --git a/libclamav/c++/llvm/test/CodeGen/Generic/2007-04-14-BitTestsBadMask.ll b/libclamav/c++/llvm/test/CodeGen/Generic/2007-04-14-BitTestsBadMask.ll
deleted file mode 100644
index 00337b9..0000000
--- a/libclamav/c++/llvm/test/CodeGen/Generic/2007-04-14-BitTestsBadMask.ll
+++ /dev/null
@@ -1,160 +0,0 @@
-; RUN: llc < %s -march=x86 | grep 8388635
-; RUN: llc < %s -march=x86-64 | grep 4294981120
-; PR 1325
-
-; ModuleID = 'bugpoint.test.bc'
-target datalayout = "E-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64"
-target triple = "powerpc-apple-darwin8.8.0"
-;target triple = "i686-linux-gnu"
-	%struct.FILE = type { i8*, i32, i32, i16, i16, %struct.__sbuf, i32, i8*, i32 (i8*)*, i32 (i8*, i8*, i32)*, i64 (i8*, i64, i32)*, i32 (i8*, i8*, i32)*, %struct.__sbuf, %struct.__sFILEX*, i32, [3 x i8], [1 x i8], %struct.__sbuf, i32, i64 }
-	%struct.__sFILEX = type opaque
-	%struct.__sbuf = type { i8*, i32 }
- at PL_rsfp = external global %struct.FILE*		; <%struct.FILE**> [#uses=1]
- at PL_bufend = external global i8*		; <i8**> [#uses=1]
- at PL_in_eval = external global i32		; <i32*> [#uses=1]
-
-declare fastcc void @incline(i8*)
-
-define i16 @Perl_skipspace_bb60(i8* %s, i8** %s_addr.4.out) {
-newFuncRoot:
-	%tmp138.loc = alloca i8*		; <i8**> [#uses=2]
-	%s_addr.4.loc = alloca i8*		; <i8**> [#uses=2]
-	%tmp274.loc = alloca i8*		; <i8**> [#uses=2]
-	br label %bb60
-
-cond_next154.UnifiedReturnBlock_crit_edge.exitStub:		; preds = %codeRepl
-	store i8* %s_addr.4.reload, i8** %s_addr.4.out
-	ret i16 0
-
-cond_next161.UnifiedReturnBlock_crit_edge.exitStub:		; preds = %codeRepl
-	store i8* %s_addr.4.reload, i8** %s_addr.4.out
-	ret i16 1
-
-cond_next167.UnifiedReturnBlock_crit_edge.exitStub:		; preds = %codeRepl
-	store i8* %s_addr.4.reload, i8** %s_addr.4.out
-	ret i16 2
-
-cond_false29.i.cond_true190_crit_edge.exitStub:		; preds = %codeRepl
-	store i8* %s_addr.4.reload, i8** %s_addr.4.out
-	ret i16 3
-
-cond_next.i.cond_true190_crit_edge.exitStub:		; preds = %codeRepl
-	store i8* %s_addr.4.reload, i8** %s_addr.4.out
-	ret i16 4
-
-cond_true19.i.cond_true190_crit_edge.exitStub:		; preds = %codeRepl
-	store i8* %s_addr.4.reload, i8** %s_addr.4.out
-	ret i16 5
-
-bb60:		; preds = %bb60.backedge, %newFuncRoot
-	%s_addr.2 = phi i8* [ %s, %newFuncRoot ], [ %s_addr.2.be, %bb60.backedge ]		; <i8*> [#uses=3]
-	%tmp61 = load i8** @PL_bufend		; <i8*> [#uses=1]
-	%tmp63 = icmp ult i8* %s_addr.2, %tmp61		; <i1> [#uses=1]
-	br i1 %tmp63, label %bb60.cond_next67_crit_edge, label %bb60.bb101_crit_edge
-
-bb37:		; preds = %cond_next67.bb37_crit_edge5, %cond_next67.bb37_crit_edge4, %cond_next67.bb37_crit_edge3, %cond_next67.bb37_crit_edge2, %cond_next67.bb37_crit_edge
-	%tmp40 = icmp eq i8 %tmp69, 10		; <i1> [#uses=1]
-	%tmp43 = getelementptr i8* %s_addr.27.2, i32 1		; <i8*> [#uses=5]
-	br i1 %tmp40, label %cond_true45, label %bb37.bb60_crit_edge
-
-cond_true45:		; preds = %bb37
-	%tmp46 = volatile load i32* @PL_in_eval		; <i32> [#uses=1]
-	%tmp47 = icmp eq i32 %tmp46, 0		; <i1> [#uses=1]
-	br i1 %tmp47, label %cond_true45.bb60_crit_edge, label %cond_true50
-
-cond_true50:		; preds = %cond_true45
-	%tmp51 = volatile load %struct.FILE** @PL_rsfp		; <%struct.FILE*> [#uses=1]
-	%tmp52 = icmp eq %struct.FILE* %tmp51, null		; <i1> [#uses=1]
-	br i1 %tmp52, label %cond_true55, label %cond_true50.bb60_crit_edge
-
-cond_true55:		; preds = %cond_true50
-	tail call fastcc void @incline( i8* %tmp43 )
-	br label %bb60.backedge
-
-cond_next67:		; preds = %Perl_newSV.exit.cond_next67_crit_edge, %cond_true148.cond_next67_crit_edge, %bb60.cond_next67_crit_edge
-	%s_addr.27.2 = phi i8* [ %s_addr.2, %bb60.cond_next67_crit_edge ], [ %tmp274.reload, %Perl_newSV.exit.cond_next67_crit_edge ], [ %tmp138.reload, %cond_true148.cond_next67_crit_edge ]		; <i8*> [#uses=3]
-	%tmp69 = load i8* %s_addr.27.2		; <i8> [#uses=2]
-	switch i8 %tmp69, label %cond_next67.bb101_crit_edge [
-		 i8 32, label %cond_next67.bb37_crit_edge
-		 i8 9, label %cond_next67.bb37_crit_edge2
-		 i8 10, label %cond_next67.bb37_crit_edge3
-		 i8 13, label %cond_next67.bb37_crit_edge4
-		 i8 12, label %cond_next67.bb37_crit_edge5
-	]
-
-codeRepl:		; preds = %bb101.preheader
-	%targetBlock = call i16 @Perl_skipspace_bb60_bb101( i8* %s_addr.27.3.ph, i8** %tmp274.loc, i8** %s_addr.4.loc, i8** %tmp138.loc )		; <i16> [#uses=1]
-	%tmp274.reload = load i8** %tmp274.loc		; <i8*> [#uses=4]
-	%s_addr.4.reload = load i8** %s_addr.4.loc		; <i8*> [#uses=6]
-	%tmp138.reload = load i8** %tmp138.loc		; <i8*> [#uses=1]
-	switch i16 %targetBlock, label %cond_true19.i.cond_true190_crit_edge.exitStub [
-		 i16 0, label %cond_next271.bb60_crit_edge
-		 i16 1, label %cond_true290.bb60_crit_edge
-		 i16 2, label %cond_true295.bb60_crit_edge
-		 i16 3, label %Perl_newSV.exit.cond_next67_crit_edge
-		 i16 4, label %cond_true148.cond_next67_crit_edge
-		 i16 5, label %cond_next154.UnifiedReturnBlock_crit_edge.exitStub
-		 i16 6, label %cond_next161.UnifiedReturnBlock_crit_edge.exitStub
-		 i16 7, label %cond_next167.UnifiedReturnBlock_crit_edge.exitStub
-		 i16 8, label %cond_false29.i.cond_true190_crit_edge.exitStub
-		 i16 9, label %cond_next.i.cond_true190_crit_edge.exitStub
-	]
-
-bb37.bb60_crit_edge:		; preds = %bb37
-	br label %bb60.backedge
-
-cond_true45.bb60_crit_edge:		; preds = %cond_true45
-	br label %bb60.backedge
-
-cond_true50.bb60_crit_edge:		; preds = %cond_true50
-	br label %bb60.backedge
-
-bb60.cond_next67_crit_edge:		; preds = %bb60
-	br label %cond_next67
-
-bb60.bb101_crit_edge:		; preds = %bb60
-	br label %bb101.preheader
-
-cond_next67.bb101_crit_edge:		; preds = %cond_next67
-	br label %bb101.preheader
-
-cond_next67.bb37_crit_edge:		; preds = %cond_next67
-	br label %bb37
-
-cond_next67.bb37_crit_edge2:		; preds = %cond_next67
-	br label %bb37
-
-cond_next67.bb37_crit_edge3:		; preds = %cond_next67
-	br label %bb37
-
-cond_next67.bb37_crit_edge4:		; preds = %cond_next67
-	br label %bb37
-
-cond_next67.bb37_crit_edge5:		; preds = %cond_next67
-	br label %bb37
-
-cond_true148.cond_next67_crit_edge:		; preds = %codeRepl
-	br label %cond_next67
-
-cond_next271.bb60_crit_edge:		; preds = %codeRepl
-	br label %bb60.backedge
-
-cond_true290.bb60_crit_edge:		; preds = %codeRepl
-	br label %bb60.backedge
-
-cond_true295.bb60_crit_edge:		; preds = %codeRepl
-	br label %bb60.backedge
-
-Perl_newSV.exit.cond_next67_crit_edge:		; preds = %codeRepl
-	br label %cond_next67
-
-bb101.preheader:		; preds = %cond_next67.bb101_crit_edge, %bb60.bb101_crit_edge
-	%s_addr.27.3.ph = phi i8* [ %s_addr.27.2, %cond_next67.bb101_crit_edge ], [ %s_addr.2, %bb60.bb101_crit_edge ]		; <i8*> [#uses=1]
-	br label %codeRepl
-
-bb60.backedge:		; preds = %cond_true295.bb60_crit_edge, %cond_true290.bb60_crit_edge, %cond_next271.bb60_crit_edge, %cond_true50.bb60_crit_edge, %cond_true45.bb60_crit_edge, %bb37.bb60_crit_edge, %cond_true55
-	%s_addr.2.be = phi i8* [ %tmp43, %cond_true55 ], [ %tmp43, %bb37.bb60_crit_edge ], [ %tmp43, %cond_true45.bb60_crit_edge ], [ %tmp43, %cond_true50.bb60_crit_edge ], [ %tmp274.reload, %cond_next271.bb60_crit_edge ], [ %tmp274.reload, %cond_true290.bb60_crit_edge ], [ %tmp274.reload, %cond_true295.bb60_crit_edge ]		; <i8*> [#uses=1]
-	br label %bb60
-}
-
-declare i16 @Perl_skipspace_bb60_bb101(i8*, i8**, i8**, i8**)
diff --git a/libclamav/c++/llvm/test/CodeGen/Generic/2007-04-27-BitTestsBadMask.ll b/libclamav/c++/llvm/test/CodeGen/Generic/2007-04-27-BitTestsBadMask.ll
deleted file mode 100644
index 3e8857f..0000000
--- a/libclamav/c++/llvm/test/CodeGen/Generic/2007-04-27-BitTestsBadMask.ll
+++ /dev/null
@@ -1,18 +0,0 @@
-; RUN: llc < %s -march=x86 | grep je | count 3
-; RUN: llc < %s -march=x86-64 | grep 4297064449
-; PR 1325+
-
-define i32 @foo(i8 %bar) {
-entry:
-	switch i8 %bar, label %bb1203 [
-		 i8 117, label %bb1204
-		 i8 85, label %bb1204
-		 i8 106, label %bb1204
-	]
-
-bb1203:		; preds = %entry
-	ret i32 1
-
-bb1204:		; preds = %entry, %entry, %entry
-	ret i32 2
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/Generic/2007-05-03-EHTypeInfo.ll b/libclamav/c++/llvm/test/CodeGen/Generic/2007-05-03-EHTypeInfo.ll
index 533aa4a..bb774b4 100644
--- a/libclamav/c++/llvm/test/CodeGen/Generic/2007-05-03-EHTypeInfo.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Generic/2007-05-03-EHTypeInfo.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -enable-eh -march=x86
+; RUN: llc < %s -enable-eh
 
 	%struct.exception = type { i8, i8, i32, i8*, i8*, i32, i8* }
 @program_error = external global %struct.exception		; <%struct.exception*> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/Generic/addc-fold2.ll b/libclamav/c++/llvm/test/CodeGen/Generic/addc-fold2.ll
deleted file mode 100644
index 34f5ac1..0000000
--- a/libclamav/c++/llvm/test/CodeGen/Generic/addc-fold2.ll
+++ /dev/null
@@ -1,10 +0,0 @@
-; RUN: llc < %s -march=x86 | grep add
-; RUN: llc < %s -march=x86 | not grep adc
-
-define i64 @test(i64 %A, i32 %B) {
-        %tmp12 = zext i32 %B to i64             ; <i64> [#uses=1]
-        %tmp3 = shl i64 %tmp12, 32              ; <i64> [#uses=1]
-        %tmp5 = add i64 %tmp3, %A               ; <i64> [#uses=1]
-        ret i64 %tmp5
-}
-
diff --git a/libclamav/c++/llvm/test/CodeGen/Generic/fpowi-promote.ll b/libclamav/c++/llvm/test/CodeGen/Generic/fpowi-promote.ll
index 82628ef..8dacebe 100644
--- a/libclamav/c++/llvm/test/CodeGen/Generic/fpowi-promote.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Generic/fpowi-promote.ll
@@ -1,5 +1,4 @@
 ; RUN: llc < %s
-; RUN: llc < %s -march=x86 -mcpu=i386
 
 ; PR1239
 
diff --git a/libclamav/c++/llvm/test/CodeGen/Generic/switch-lower-feature-2.ll b/libclamav/c++/llvm/test/CodeGen/Generic/switch-lower-feature-2.ll
deleted file mode 100644
index 80e0618..0000000
--- a/libclamav/c++/llvm/test/CodeGen/Generic/switch-lower-feature-2.ll
+++ /dev/null
@@ -1,50 +0,0 @@
-; RUN: llc < %s -march=x86 -o %t
-; RUN: grep jb %t | count 1
-; RUN: grep \\\$6 %t | count 2
-; RUN: grep 1024 %t | count 1
-; RUN: grep 1023 %t | count 1
-; RUN: grep 119  %t | count 1
-; RUN: grep JTI %t | count 2
-; RUN: grep jg %t | count 3
-; RUN: grep ja %t | count 1
-; RUN: grep jns %t | count 1
-
-target triple = "i686-pc-linux-gnu"
-
-define i32 @main(i32 %tmp158) {
-entry:
-        switch i32 %tmp158, label %bb336 [
-	         i32 -2147483648, label %bb338
-		 i32 -2147483647, label %bb338
-		 i32 -2147483646, label %bb338
-	         i32 120, label %bb338
-	         i32 121, label %bb339
-                 i32 122, label %bb340
-                 i32 123, label %bb341
-                 i32 124, label %bb342
-                 i32 125, label %bb343
-                 i32 126, label %bb336
-		 i32 1024, label %bb338
-                 i32 0, label %bb338
-                 i32 1, label %bb338
-                 i32 2, label %bb338
-                 i32 3, label %bb338
-                 i32 4, label %bb338
-		 i32 5, label %bb338
-        ]
-bb336:
-  ret i32 10
-bb338:
-  ret i32 11
-bb339:
-  ret i32 12
-bb340:
-  ret i32 13
-bb341:
-  ret i32 14
-bb342:
-  ret i32 15
-bb343:
-  ret i32 18
-
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/Generic/switch-lower-feature.ll b/libclamav/c++/llvm/test/CodeGen/Generic/switch-lower-feature.ll
index 65fdf5a..1e9dbee 100644
--- a/libclamav/c++/llvm/test/CodeGen/Generic/switch-lower-feature.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Generic/switch-lower-feature.ll
@@ -1,10 +1,6 @@
-; RUN: llc < %s -march=x86 -o - | grep {\$7} | count 1
-; RUN: llc < %s -march=x86 -o - | grep {\$6} | count 1
-; RUN: llc < %s -march=x86 -o - | grep 1024 | count 1
-; RUN: llc < %s -march=x86 -o - | grep jb | count 2
-; RUN: llc < %s -march=x86 -o - | grep je | count 1
+; RUN: llc < %s
 
-define i32 @main(i32 %tmp158) {
+define i32 @test(i32 %tmp158) {
 entry:
         switch i32 %tmp158, label %bb336 [
 	         i32 120, label %bb338
@@ -27,3 +23,41 @@ bb336:
 bb338:
   ret i32 11
 }
+
+define i32 @test2(i32 %tmp158) {
+entry:
+        switch i32 %tmp158, label %bb336 [
+	         i32 -2147483648, label %bb338
+		 i32 -2147483647, label %bb338
+		 i32 -2147483646, label %bb338
+	         i32 120, label %bb338
+	         i32 121, label %bb339
+                 i32 122, label %bb340
+                 i32 123, label %bb341
+                 i32 124, label %bb342
+                 i32 125, label %bb343
+                 i32 126, label %bb336
+		 i32 1024, label %bb338
+                 i32 0, label %bb338
+                 i32 1, label %bb338
+                 i32 2, label %bb338
+                 i32 3, label %bb338
+                 i32 4, label %bb338
+		 i32 5, label %bb338
+        ]
+bb336:
+  ret i32 10
+bb338:
+  ret i32 11
+bb339:
+  ret i32 12
+bb340:
+  ret i32 13
+bb341:
+  ret i32 14
+bb342:
+  ret i32 15
+bb343:
+  ret i32 18
+
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/PowerPC/2007-04-30-InlineAsmEarlyClobber.ll b/libclamav/c++/llvm/test/CodeGen/PowerPC/2007-04-30-InlineAsmEarlyClobber.ll
index c4ed166..d1d28ae 100644
--- a/libclamav/c++/llvm/test/CodeGen/PowerPC/2007-04-30-InlineAsmEarlyClobber.ll
+++ b/libclamav/c++/llvm/test/CodeGen/PowerPC/2007-04-30-InlineAsmEarlyClobber.ll
@@ -1,7 +1,7 @@
 ; RUN: llc < %s | grep {subfc r3,r5,r4}
 ; RUN: llc < %s | grep {subfze r4,r2}
-; RUN: llc < %s -regalloc=local | grep {subfc r5,r4,r3}
-; RUN: llc < %s -regalloc=local | grep {subfze r2,r2}
+; RUN: llc < %s -regalloc=local | grep {subfc r2,r5,r4}
+; RUN: llc < %s -regalloc=local | grep {subfze r3,r3}
 ; The first argument of subfc must not be the same as any other register.
 
 ; PR1357
diff --git a/libclamav/c++/llvm/test/CodeGen/PowerPC/2008-01-25-EmptyFunction.ll b/libclamav/c++/llvm/test/CodeGen/PowerPC/2008-01-25-EmptyFunction.ll
index db2ab87..a05245d 100644
--- a/libclamav/c++/llvm/test/CodeGen/PowerPC/2008-01-25-EmptyFunction.ll
+++ b/libclamav/c++/llvm/test/CodeGen/PowerPC/2008-01-25-EmptyFunction.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -march=ppc32 | grep nop
+; RUN: llc < %s -march=ppc32 | grep .byte
 target triple = "powerpc-apple-darwin8"
 
 
diff --git a/libclamav/c++/llvm/test/CodeGen/PowerPC/2010-02-04-EmptyGlobal.ll b/libclamav/c++/llvm/test/CodeGen/PowerPC/2010-02-04-EmptyGlobal.ll
new file mode 100644
index 0000000..32ddb34
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/PowerPC/2010-02-04-EmptyGlobal.ll
@@ -0,0 +1,11 @@
+; RUN: llc < %s -mtriple=powerpc-apple-darwin10 -relocation-model=pic -disable-fp-elim | FileCheck %s
+; <rdar://problem/7604010>
+
+%cmd.type = type { }
+
+ at _cmd = constant %cmd.type zeroinitializer
+
+; CHECK:      .globl __cmd
+; CHECK-NEXT: .align 3
+; CHECK-NEXT: __cmd:
+; CHECK-NEXT: .space 1
diff --git a/libclamav/c++/llvm/test/CodeGen/PowerPC/2010-02-12-saveCR.ll b/libclamav/c++/llvm/test/CodeGen/PowerPC/2010-02-12-saveCR.ll
new file mode 100644
index 0000000..b73382e
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/PowerPC/2010-02-12-saveCR.ll
@@ -0,0 +1,30 @@
+; RUN: llc < %s -mtriple=powerpc-apple-darwin | FileCheck %s
+; ModuleID = 'hh.c'
+target datalayout = "E-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f128:64:128-n32"
+target triple = "powerpc-apple-darwin9.6"
+; This formerly used R0 for both the stack address and CR.
+
+define void @foo() nounwind {
+entry:
+;CHECK:  mfcr r2
+;CHECK:  rlwinm r2, r2, 8, 0, 31
+;CHECK:  lis r0, 1
+;CHECK:  ori r0, r0, 34540
+;CHECK:  stwx r2, r1, r0
+  %x = alloca [100000 x i8]                       ; <[100000 x i8]*> [#uses=1]
+  %"alloca point" = bitcast i32 0 to i32          ; <i32> [#uses=0]
+  %x1 = bitcast [100000 x i8]* %x to i8*          ; <i8*> [#uses=1]
+  call void @bar(i8* %x1) nounwind
+  call void asm sideeffect "", "~{cr2}"() nounwind
+  br label %return
+
+return:                                           ; preds = %entry
+;CHECK:  lis r0, 1
+;CHECK:  ori r0, r0, 34540
+;CHECK:  lwzx r2, r1, r0
+;CHECK:  rlwinm r2, r2, 24, 0, 31
+;CHECK:  mtcrf 32, r2
+  ret void
+}
+
+declare void @bar(i8*)
diff --git a/libclamav/c++/llvm/test/CodeGen/PowerPC/align.ll b/libclamav/c++/llvm/test/CodeGen/PowerPC/align.ll
index 2e9b4ec..109a837 100644
--- a/libclamav/c++/llvm/test/CodeGen/PowerPC/align.ll
+++ b/libclamav/c++/llvm/test/CodeGen/PowerPC/align.ll
@@ -1,11 +1,42 @@
-; RUN: llc < %s -mtriple=powerpc-apple-darwin9 | \
-; RUN:   grep align.4 | count 1
-; RUN: llc < %s -mtriple=powerpc-apple-darwin9 | \
-; RUN:   grep align.2 | count 1
-; RUN: llc < %s -mtriple=powerpc-apple-darwin9 | \
-; RUN:   grep align.3 | count 1
+; RUN: llc < %s -mtriple=powerpc-linux-gnu | FileCheck %s -check-prefix=ELF
+; RUN: llc < %s -mtriple=powerpc-apple-darwin9 | FileCheck %s -check-prefix=DARWIN
 
- at A = global <4 x i32> < i32 10, i32 20, i32 30, i32 40 >                ; <<4 x i32>*> [#uses=0]
- at B = global float 1.000000e+02          ; <float*> [#uses=0]
- at C = global double 2.000000e+03         ; <double*> [#uses=0]
+ at a = global i1 true
+; no alignment
 
+ at b = global i8 1
+; no alignment
+
+ at c = global i16 2
+;ELF: .align 1
+;ELF: c:
+;DARWIN: .align 1
+;DARWIN: _c:
+
+ at d = global i32 3
+;ELF: .align 2
+;ELF: d:
+;DARWIN: .align 2
+;DARWIN: _d:
+
+ at e = global i64 4
+;ELF: .align 3
+;ELF: e
+;DARWIN: .align 3
+;DARWIN: _e:
+
+ at f = global float 5.0
+;ELF: .align 2
+;ELF: f:
+;DARWIN: .align 2
+;DARWIN: _f:
+
+ at g = global double 6.0
+;ELF: .align 3
+;ELF: g:
+;DARWIN: .align 3
+;DARWIN: _g:
+
+ at bar = common global [75 x i8] zeroinitializer, align 128
+;ELF: .comm bar,75,128
+;DARWIN: .comm _bar,75,7
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/2010-02-11-phi-cycle.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/2010-02-11-phi-cycle.ll
new file mode 100644
index 0000000..0f23ee7
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/2010-02-11-phi-cycle.ll
@@ -0,0 +1,72 @@
+; RUN: llc < %s -mtriple=thumbv7-apple-darwin | FileCheck %s
+target datalayout = "e-p:32:32:32-i1:8:32-i8:8:32-i16:16:32-i32:32:32-i64:32:32-f32:32:32-f64:32:32-v64:64:64-v128:128:128-a0:0:32-n32"
+
+define arm_apcscc i32 @test(i32 %n) nounwind {
+; CHECK: test:
+; CHECK-NOT: mov
+; CHECK: return
+entry:
+  %0 = icmp eq i32 %n, 1                          ; <i1> [#uses=1]
+  br i1 %0, label %return, label %bb.nph
+
+bb.nph:                                           ; preds = %entry
+  %tmp = add i32 %n, -1                           ; <i32> [#uses=1]
+  br label %bb
+
+bb:                                               ; preds = %bb.nph, %bb
+  %indvar = phi i32 [ 0, %bb.nph ], [ %indvar.next, %bb ] ; <i32> [#uses=1]
+  %u.05 = phi i64 [ undef, %bb.nph ], [ %ins, %bb ] ; <i64> [#uses=1]
+  %1 = tail call arm_apcscc  i32 @f() nounwind    ; <i32> [#uses=1]
+  %tmp4 = zext i32 %1 to i64                      ; <i64> [#uses=1]
+  %mask = and i64 %u.05, -4294967296              ; <i64> [#uses=1]
+  %ins = or i64 %tmp4, %mask                      ; <i64> [#uses=2]
+  tail call arm_apcscc  void @g(i64 %ins) nounwind
+  %indvar.next = add i32 %indvar, 1               ; <i32> [#uses=2]
+  %exitcond = icmp eq i32 %indvar.next, %tmp      ; <i1> [#uses=1]
+  br i1 %exitcond, label %return, label %bb
+
+return:                                           ; preds = %bb, %entry
+  ret i32 undef
+}
+
+define arm_apcscc i32 @test_dead_cycle(i32 %n) nounwind {
+; CHECK: test_dead_cycle:
+; CHECK: blx
+; CHECK-NOT: mov
+; CHECK: blx
+entry:
+  %0 = icmp eq i32 %n, 1                          ; <i1> [#uses=1]
+  br i1 %0, label %return, label %bb.nph
+
+bb.nph:                                           ; preds = %entry
+  %tmp = add i32 %n, -1                           ; <i32> [#uses=2]
+  br label %bb
+
+bb:                                               ; preds = %bb.nph, %bb2
+  %indvar = phi i32 [ 0, %bb.nph ], [ %indvar.next, %bb2 ] ; <i32> [#uses=2]
+  %u.17 = phi i64 [ undef, %bb.nph ], [ %u.0, %bb2 ] ; <i64> [#uses=2]
+  %tmp9 = sub i32 %tmp, %indvar                   ; <i32> [#uses=1]
+  %1 = icmp sgt i32 %tmp9, 1                      ; <i1> [#uses=1]
+  br i1 %1, label %bb1, label %bb2
+
+bb1:                                              ; preds = %bb
+  %2 = tail call arm_apcscc  i32 @f() nounwind    ; <i32> [#uses=1]
+  %tmp6 = zext i32 %2 to i64                      ; <i64> [#uses=1]
+  %mask = and i64 %u.17, -4294967296              ; <i64> [#uses=1]
+  %ins = or i64 %tmp6, %mask                      ; <i64> [#uses=1]
+  tail call arm_apcscc  void @g(i64 %ins) nounwind
+  br label %bb2
+
+bb2:                                              ; preds = %bb1, %bb
+  %u.0 = phi i64 [ %ins, %bb1 ], [ %u.17, %bb ]   ; <i64> [#uses=2]
+  %indvar.next = add i32 %indvar, 1               ; <i32> [#uses=2]
+  %exitcond = icmp eq i32 %indvar.next, %tmp      ; <i1> [#uses=1]
+  br i1 %exitcond, label %return, label %bb
+
+return:                                           ; preds = %bb2, %entry
+  ret i32 undef
+}
+
+declare arm_apcscc i32 @f()
+
+declare arm_apcscc void @g(i64)
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/cross-rc-coalescing-2.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/cross-rc-coalescing-2.ll
index 8f6449e..2b20931 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/cross-rc-coalescing-2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/cross-rc-coalescing-2.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -mtriple=thumbv7-apple-darwin9 -mcpu=cortex-a8 | grep vmov.f32 | count 7
+; RUN: llc < %s -mtriple=thumbv7-apple-darwin9 -mcpu=cortex-a8 | grep vmov.f32 | count 3
 
 define arm_apcscc void @fht(float* nocapture %fz, i16 signext %n) nounwind {
 entry:
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/lsr-deficiency.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/lsr-deficiency.ll
index 7b1b57a..ac2cd34 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/lsr-deficiency.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/lsr-deficiency.ll
@@ -1,25 +1,29 @@
 ; RUN: llc < %s -mtriple=thumbv7-apple-darwin10 -relocation-model=pic | FileCheck %s
 ; rdar://7387640
 
-; FIXME: We still need to rewrite array reference iv of stride -4 with loop
-; count iv of stride -1.
+; This now reduces to a single induction variable.
+
+; TODO: It still gets a GPR shuffle at the end of the loop
+; This is because something in instruction selection has decided
+; that comparing the pre-incremented value with zero is better
+; than comparing the post-incremented value with -4.
 
 @G = external global i32                          ; <i32*> [#uses=2]
 @array = external global i32*                     ; <i32**> [#uses=1]
 
 define arm_apcscc void @t() nounwind optsize {
 ; CHECK: t:
-; CHECK: mov.w r2, #4000
-; CHECK: movw r3, #1001
+; CHECK: mov.w r2, #1000
 entry:
   %.pre = load i32* @G, align 4                   ; <i32> [#uses=1]
   br label %bb
 
 bb:                                               ; preds = %bb, %entry
 ; CHECK: LBB1_1:
-; CHECK: subs r3, #1
-; CHECK: cmp r3, #0
-; CHECK: sub.w r2, r2, #4
+; CHECK: cmp r2, #0
+; CHECK: sub.w r9, r2, #1
+; CHECK: mov r2, r9
+
   %0 = phi i32 [ %.pre, %entry ], [ %3, %bb ]     ; <i32> [#uses=1]
   %indvar = phi i32 [ 0, %entry ], [ %indvar.next, %bb ] ; <i32> [#uses=2]
   %tmp5 = sub i32 1000, %indvar                   ; <i32> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-ifcvt1.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-ifcvt1.ll
index 71199ab..1d26756 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-ifcvt1.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-ifcvt1.ll
@@ -1,6 +1,6 @@
 ; RUN: llc < %s -mtriple=thumbv7-apple-darwin | FileCheck %s
 
-define i32 @t1(i32 %a, i32 %b, i32 %c, i32 %d) {
+define i32 @t1(i32 %a, i32 %b, i32 %c, i32 %d) nounwind {
 ; CHECK: t1:
 ; CHECK: it ne
 ; CHECK: cmpne
@@ -20,12 +20,12 @@ cond_next:
 }
 
 ; FIXME: Check for # of unconditional branch after adding branch folding post ifcvt.
-define i32 @t2(i32 %a, i32 %b) {
+define i32 @t2(i32 %a, i32 %b) nounwind {
 entry:
 ; CHECK: t2:
-; CHECK: ite le
-; CHECK: suble
+; CHECK: ite gt
 ; CHECK: subgt
+; CHECK: suble
 	%tmp1434 = icmp eq i32 %a, %b		; <i1> [#uses=1]
 	br i1 %tmp1434, label %bb17, label %bb.outer
 
@@ -60,14 +60,14 @@ bb17:		; preds = %cond_false, %cond_true, %entry
 
 @x = external global i32*		; <i32**> [#uses=1]
 
-define void @foo(i32 %a) {
+define void @foo(i32 %a) nounwind {
 entry:
 	%tmp = load i32** @x		; <i32*> [#uses=1]
 	store i32 %a, i32* %tmp
 	ret void
 }
 
-define void @t3(i32 %a, i32 %b) {
+define void @t3(i32 %a, i32 %b) nounwind {
 entry:
 ; CHECK: t3:
 ; CHECK: it lt
diff --git a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-spill-q.ll b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-spill-q.ll
index 2b08789..ff178b4 100644
--- a/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-spill-q.ll
+++ b/libclamav/c++/llvm/test/CodeGen/Thumb2/thumb2-spill-q.ll
@@ -12,8 +12,8 @@ declare <4 x float> @llvm.arm.neon.vld1.v4f32(i8*) nounwind readonly
 define arm_apcscc void @aaa(%quuz* %this, i8* %block) {
 ; CHECK: aaa:
 ; CHECK: bic r4, r4, #15
-; CHECK: vst1.64 {{.*}}sp, :128
-; CHECK: vld1.64 {{.*}}sp, :128
+; CHECK: vst1.64 {{.*}}[{{.*}}, :128]
+; CHECK: vld1.64 {{.*}}[{{.*}}, :128]
 entry:
   %0 = call <4 x float> @llvm.arm.neon.vld1.v4f32(i8* undef) nounwind ; <<4 x float>> [#uses=1]
   store float 6.300000e+01, float* undef, align 4
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2006-05-11-InstrSched.ll b/libclamav/c++/llvm/test/CodeGen/X86/2006-05-11-InstrSched.ll
index bdbe713..56d6aa9 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2006-05-11-InstrSched.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2006-05-11-InstrSched.ll
@@ -1,5 +1,5 @@
 ; RUN: llc < %s -march=x86 -mattr=+sse2 -stats -realign-stack=0 |&\
-; RUN:     grep {asm-printer} | grep 31
+; RUN:     grep {asm-printer} | grep 34
 
 target datalayout = "e-p:32:32"
 define void @foo(i32* %mc, i32* %bp, i32* %ms, i32* %xmb, i32* %mpp, i32* %tpmm, i32* %ip, i32* %tpim, i32* %dpp, i32* %tpdm, i32* %bpi, i32 %M) nounwind {
@@ -40,7 +40,7 @@ cond_true:		; preds = %cond_true, %entry
 	%tmp137.upgrd.7 = bitcast i32* %tmp137 to <2 x i64>*		; <<2 x i64>*> [#uses=1]
 	store <2 x i64> %tmp131, <2 x i64>* %tmp137.upgrd.7
 	%tmp147 = add nsw i32 %tmp.10, 8		; <i32> [#uses=1]
-	%tmp.upgrd.8 = icmp slt i32 %tmp147, %M		; <i1> [#uses=1]
+	%tmp.upgrd.8 = icmp ne i32 %tmp147, %M		; <i1> [#uses=1]
 	%indvar.next = add i32 %indvar, 1		; <i32> [#uses=1]
 	br i1 %tmp.upgrd.8, label %cond_true, label %return
 
diff --git a/libclamav/c++/llvm/test/CodeGen/Generic/2006-12-16-InlineAsmCrash.ll b/libclamav/c++/llvm/test/CodeGen/X86/2006-12-16-InlineAsmCrash.ll
similarity index 100%
rename from libclamav/c++/llvm/test/CodeGen/Generic/2006-12-16-InlineAsmCrash.ll
rename to libclamav/c++/llvm/test/CodeGen/X86/2006-12-16-InlineAsmCrash.ll
diff --git a/libclamav/c++/llvm/test/CodeGen/Generic/2007-02-23-DAGCombine-Miscompile.ll b/libclamav/c++/llvm/test/CodeGen/X86/2007-02-23-DAGCombine-Miscompile.ll
similarity index 100%
rename from libclamav/c++/llvm/test/CodeGen/Generic/2007-02-23-DAGCombine-Miscompile.ll
rename to libclamav/c++/llvm/test/CodeGen/X86/2007-02-23-DAGCombine-Miscompile.ll
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2007-03-15-GEP-Idx-Sink.ll b/libclamav/c++/llvm/test/CodeGen/X86/2007-03-15-GEP-Idx-Sink.ll
index 4cac9b4..e1f8901 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2007-03-15-GEP-Idx-Sink.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2007-03-15-GEP-Idx-Sink.ll
@@ -1,7 +1,7 @@
 ; RUN: llc < %s -march=x86 -mtriple=i686-darwin | \
 ; RUN:   grep push | count 3
 
-define void @foo(i8** %buf, i32 %size, i32 %col, i8* %p) {
+define void @foo(i8** %buf, i32 %size, i32 %col, i8* %p) nounwind {
 entry:
 	icmp sgt i32 %size, 0		; <i1>:0 [#uses=1]
 	br i1 %0, label %bb.preheader, label %return
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2007-10-05-3AddrConvert.ll b/libclamav/c++/llvm/test/CodeGen/X86/2007-10-05-3AddrConvert.ll
index 67323e8..2c2706d 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2007-10-05-3AddrConvert.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2007-10-05-3AddrConvert.ll
@@ -36,7 +36,9 @@ bb.i6.i:		; preds = %bb.i6.i, %stepsystem.exit.i
 
 bb107.i.i:		; preds = %bb107.i.i, %bb.i6.i
 	%q_addr.0.i.i.in = phi %struct.bnode** [ null, %bb107.i.i ], [ %4, %bb.i6.i ]		; <%struct.bnode**> [#uses=1]
-	%q_addr.0.i.i = load %struct.bnode** %q_addr.0.i.i.in		; <%struct.bnode*> [#uses=0]
+	%q_addr.0.i.i = load %struct.bnode** %q_addr.0.i.i.in		; <%struct.bnode*> [#uses=1]
+	%q_addr.1 = getelementptr %struct.anon* %0, i32 0, i32 4, i32 1
+	store %struct.bnode* %q_addr.0.i.i, %struct.bnode** %q_addr.1, align 4
 	br label %bb107.i.i
 
 bb47.loopexit.i:		; preds = %bb32.i
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2007-11-30-LoadFolding-Bug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2007-11-30-LoadFolding-Bug.ll
index 721d4c9..8e315f4 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2007-11-30-LoadFolding-Bug.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2007-11-30-LoadFolding-Bug.ll
@@ -35,7 +35,7 @@ cond_next36.i:		; preds = %cond_next.i
 bb.i28.i:		; preds = %bb.i28.i, %cond_next36.i
 ; CHECK: %bb.i28.i
 ; CHECK: addl $2
-; CHECK: addl $2
+; CHECK: addl $-2
 	%j.0.reg2mem.0.i16.i = phi i32 [ 0, %cond_next36.i ], [ %indvar.next39.i, %bb.i28.i ]		; <i32> [#uses=2]
 	%din_addr.1.reg2mem.0.i17.i = phi double [ 0.000000e+00, %cond_next36.i ], [ %tmp16.i25.i, %bb.i28.i ]		; <double> [#uses=1]
 	%tmp1.i18.i = fptosi double %din_addr.1.reg2mem.0.i17.i to i32		; <i32> [#uses=2]
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2007-11-30-TestLoadFolding.ll b/libclamav/c++/llvm/test/CodeGen/X86/2007-11-30-TestLoadFolding.ll
deleted file mode 100644
index debb461..0000000
--- a/libclamav/c++/llvm/test/CodeGen/X86/2007-11-30-TestLoadFolding.ll
+++ /dev/null
@@ -1,58 +0,0 @@
-; RUN: llc < %s -march=x86 -stats |& \
-; RUN:   grep {1 .*folded into instructions}
-; RUN: llc < %s -march=x86 | grep cmp | count 4
-
-	%struct.quad_struct = type { i32, i32, %struct.quad_struct*, %struct.quad_struct*, %struct.quad_struct*, %struct.quad_struct*, %struct.quad_struct* }
-
-define fastcc i32 @perimeter(%struct.quad_struct* %tree, i32 %size) {
-entry:
-	%tree.idx7.val = load %struct.quad_struct** null		; <%struct.quad_struct*> [#uses=1]
-	%tmp8.i51 = icmp eq %struct.quad_struct* %tree.idx7.val, null		; <i1> [#uses=2]
-	br i1 %tmp8.i51, label %cond_next, label %cond_next.i52
-
-cond_next.i52:		; preds = %entry
-	ret i32 0
-
-cond_next:		; preds = %entry
-	%tmp59 = load i32* null, align 4		; <i32> [#uses=1]
-	%tmp70 = icmp eq i32 %tmp59, 2		; <i1> [#uses=1]
-	br i1 %tmp70, label %cond_true.i35, label %bb80
-
-cond_true.i35:		; preds = %cond_next
-	%tmp14.i.i37 = load %struct.quad_struct** null, align 4		; <%struct.quad_struct*> [#uses=1]
-	%tmp3.i160 = load i32* null, align 4		; <i32> [#uses=1]
-	%tmp4.i161 = icmp eq i32 %tmp3.i160, 2		; <i1> [#uses=1]
-	br i1 %tmp4.i161, label %cond_true.i163, label %cond_false.i178
-
-cond_true.i163:		; preds = %cond_true.i35
-	%tmp7.i162 = sdiv i32 %size, 4		; <i32> [#uses=2]
-	%tmp13.i168 = tail call fastcc i32 @sum_adjacent( %struct.quad_struct* null, i32 3, i32 2, i32 %tmp7.i162 )		; <i32> [#uses=1]
-	%tmp18.i11.i170 = getelementptr %struct.quad_struct* %tmp14.i.i37, i32 0, i32 4		; <%struct.quad_struct**> [#uses=1]
-	%tmp19.i12.i171 = load %struct.quad_struct** %tmp18.i11.i170, align 4		; <%struct.quad_struct*> [#uses=1]
-	%tmp21.i173 = tail call fastcc i32 @sum_adjacent( %struct.quad_struct* %tmp19.i12.i171, i32 3, i32 2, i32 %tmp7.i162 )		; <i32> [#uses=1]
-	%tmp22.i174 = add i32 %tmp21.i173, %tmp13.i168		; <i32> [#uses=1]
-	br i1 %tmp4.i161, label %cond_true.i141, label %cond_false.i156
-
-cond_false.i178:		; preds = %cond_true.i35
-	ret i32 0
-
-cond_true.i141:		; preds = %cond_true.i163
-	%tmp7.i140 = sdiv i32 %size, 4		; <i32> [#uses=1]
-	%tmp21.i151 = tail call fastcc i32 @sum_adjacent( %struct.quad_struct* null, i32 3, i32 2, i32 %tmp7.i140 )		; <i32> [#uses=0]
-	ret i32 0
-
-cond_false.i156:		; preds = %cond_true.i163
-	%tmp22.i44 = add i32 0, %tmp22.i174		; <i32> [#uses=0]
-	br i1 %tmp8.i51, label %bb22.i, label %cond_next.i
-
-bb80:		; preds = %cond_next
-	ret i32 0
-
-cond_next.i:		; preds = %cond_false.i156
-	ret i32 0
-
-bb22.i:		; preds = %cond_false.i156
-	ret i32 0
-}
-
-declare fastcc i32 @sum_adjacent(%struct.quad_struct*, i32, i32, i32)
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2008-01-25-EmptyFunction.ll b/libclamav/c++/llvm/test/CodeGen/X86/2008-01-25-EmptyFunction.ll
index b936686..387645f 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2008-01-25-EmptyFunction.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2008-01-25-EmptyFunction.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -march=x86 | grep nop
+; RUN: llc < %s -march=x86 | grep {.byte	0}
 target triple = "i686-apple-darwin8"
 
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2008-07-11-SpillerBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2008-07-11-SpillerBug.ll
index 88a5fde..548b44d 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2008-07-11-SpillerBug.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2008-07-11-SpillerBug.ll
@@ -1,9 +1,7 @@
-; RUN: llc < %s -march=x86 -relocation-model=static -disable-fp-elim -post-RA-scheduler=false | FileCheck %s
+; RUN: llc < %s -march=x86 -relocation-model=static -disable-fp-elim -post-RA-scheduler=false -asm-verbose=0 | FileCheck %s
 ; PR2536
 
-
-; CHECK: movw %cx
-; CHECK-NEXT: andl    $65534, %
+; CHECK: andl    $65534, %
 ; CHECK-NEXT: movl %
 ; CHECK-NEXT: movl $17
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-02-07-CoalescerBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-02-07-CoalescerBug.ll
deleted file mode 100644
index 2d0bbe6..0000000
--- a/libclamav/c++/llvm/test/CodeGen/X86/2009-02-07-CoalescerBug.ll
+++ /dev/null
@@ -1,491 +0,0 @@
-; RUN: llc < %s -march=x86 -relocation-model=pic -stats |& grep {Number of valno def marked dead} | grep 1
-; rdar://6566708
-
-target triple = "i386-apple-darwin9.6"
-	%"struct..0$_58" = type { i32, %"struct.llvm::MachineOperand"**, %"struct.llvm::MachineOperand"* }
-	%"struct..1$_60" = type { i32 }
-	%"struct..3$_53" = type { i64 }
-	%struct.__false_type = type <{ i8 }>
-	%"struct.llvm::APFloat" = type { %"struct.llvm::fltSemantics"*, %"struct..3$_53", i16, i16 }
-	%"struct.llvm::AbstractTypeUser" = type { i32 (...)** }
-	%"struct.llvm::AnalysisResolver" = type { %"struct.std::vector<std::pair<const llvm::PassInfo*, llvm::Pass*>,std::allocator<std::pair<const llvm::PassInfo*, llvm::Pass*> > >", %"struct.llvm::PMDataManager"* }
-	%"struct.llvm::Annotable" = type { %"struct.llvm::Annotation"* }
-	%"struct.llvm::Annotation" = type { i32 (...)**, %"struct..1$_60", %"struct.llvm::Annotation"* }
-	%"struct.llvm::Argument" = type { %"struct.llvm::Value", %"struct.llvm::ilist_node<llvm::Argument>", %"struct.llvm::Function"* }
-	%"struct.llvm::AttrListPtr" = type { %"struct.llvm::AttributeListImpl"* }
-	%"struct.llvm::AttributeListImpl" = type opaque
-	%"struct.llvm::BasicBlock" = type { %"struct.llvm::Value", %"struct.llvm::ilist_node<llvm::BasicBlock>", %"struct.llvm::iplist<llvm::Instruction,llvm::ilist_traits<llvm::Instruction> >", %"struct.llvm::Function"* }
-	%"struct.llvm::BitVector" = type { i32*, i32, i32 }
-	%"struct.llvm::BumpPtrAllocator" = type { i8* }
-	%"struct.llvm::CalleeSavedInfo" = type { i32, %"struct.llvm::TargetRegisterClass"*, i32 }
-	%"struct.llvm::CondCodeSDNode" = type { %"struct.llvm::SDNode", i32 }
-	%"struct.llvm::Constant" = type { %"struct.llvm::User" }
-	%"struct.llvm::DebugLocTracker" = type { %"struct.std::vector<llvm::DebugLocTuple,std::allocator<llvm::DebugLocTuple> >", %"struct.llvm::DenseMap<llvm::DebugLocTuple,unsigned int,llvm::DenseMapInfo<llvm::DebugLocTuple>,llvm::DenseMapInfo<unsigned int> >" }
-	%"struct.llvm::DebugLocTuple" = type { i32, i32, i32 }
-	%"struct.llvm::DenseMap<llvm::DebugLocTuple,unsigned int,llvm::DenseMapInfo<llvm::DebugLocTuple>,llvm::DenseMapInfo<unsigned int> >" = type { i32, %"struct.std::pair<llvm::DebugLocTuple,unsigned int>"*, i32, i32 }
-	%"struct.llvm::DwarfWriter" = type opaque
-	%"struct.llvm::FoldingSet<llvm::SDNode>" = type { %"struct.llvm::FoldingSetImpl" }
-	%"struct.llvm::FoldingSetImpl" = type { i32 (...)**, i8**, i32, i32 }
-	%"struct.llvm::Function" = type { %"struct.llvm::GlobalValue", %"struct.llvm::Annotable", %"struct.llvm::ilist_node<llvm::Function>", %"struct.llvm::iplist<llvm::BasicBlock,llvm::ilist_traits<llvm::BasicBlock> >", %"struct.llvm::iplist<llvm::Argument,llvm::ilist_traits<llvm::Argument> >", %"struct.llvm::ValueSymbolTable"*, %"struct.llvm::AttrListPtr" }
-	%"struct.llvm::FunctionLoweringInfo" = type opaque
-	%"struct.llvm::GlobalAddressSDNode" = type { %"struct.llvm::SDNode", %"struct.llvm::GlobalValue"*, i64 }
-	%"struct.llvm::GlobalValue" = type { %"struct.llvm::Constant", %"struct.llvm::Module"*, i32, %"struct.std::string" }
-	%"struct.llvm::GlobalVariable" = type { %"struct.llvm::GlobalValue", %"struct.llvm::ilist_node<llvm::GlobalVariable>", i8 }
-	%"struct.llvm::ImmutablePass" = type { %"struct.llvm::ModulePass" }
-	%"struct.llvm::Instruction" = type { %"struct.llvm::User", %"struct.llvm::ilist_node<llvm::Instruction>", %"struct.llvm::BasicBlock"* }
-	%"struct.llvm::LandingPadInfo" = type <{ %"struct.llvm::MachineBasicBlock"*, [12 x i8], %"struct.llvm::SmallVector<unsigned int,1u>", %"struct.llvm::SmallVector<unsigned int,1u>", i32, %"struct.llvm::Function"*, %"struct.std::vector<int,std::allocator<int> >", [3 x i32] }>
-	%"struct.llvm::MVT" = type { %"struct..1$_60" }
-	%"struct.llvm::MachineBasicBlock" = type { %"struct.llvm::ilist_node<llvm::MachineBasicBlock>", %"struct.llvm::ilist<llvm::MachineInstr>", %"struct.llvm::BasicBlock"*, i32, %"struct.llvm::MachineFunction"*, %"struct.std::vector<llvm::MachineBasicBlock*,std::allocator<llvm::MachineBasicBlock*> >", %"struct.std::vector<llvm::MachineBasicBlock*,std::allocator<llvm::MachineBasicBlock*> >", %"struct.std::vector<int,std::allocator<int> >", i32, i8 }
-	%"struct.llvm::MachineConstantPool" = type opaque
-	%"struct.llvm::MachineFrameInfo" = type { %"struct.std::vector<llvm::MachineFrameInfo::StackObject,std::allocator<llvm::MachineFrameInfo::StackObject> >", i32, i8, i8, i64, i32, i32, i8, i32, i32, %"struct.std::vector<llvm::CalleeSavedInfo,std::allocator<llvm::CalleeSavedInfo> >", %"struct.llvm::MachineModuleInfo"*, %"struct.llvm::TargetFrameInfo"* }
-	%"struct.llvm::MachineFrameInfo::StackObject" = type { i64, i32, i8, i64 }
-	%"struct.llvm::MachineFunction" = type { %"struct.llvm::Annotation", %"struct.llvm::Function"*, %"struct.llvm::TargetMachine"*, %"struct.llvm::MachineRegisterInfo"*, %"struct.llvm::AbstractTypeUser"*, %"struct.llvm::MachineFrameInfo"*, %"struct.llvm::MachineConstantPool"*, %"struct.llvm::MachineJumpTableInfo"*, %"struct.std::vector<llvm::MachineBasicBlock*,std::allocator<llvm::MachineBasicBlock*> >", %"struct.llvm::BumpPtrAllocator", %"struct.llvm::Recycler<llvm::MachineBasicBlock,116ul,4ul>", %"struct.llvm::Recycler<llvm::MachineBasicBlock,116ul,4ul>", %"struct.llvm::ilist<llvm::MachineBasicBlock>", %"struct.llvm::DebugLocTracker" }
-	%"struct.llvm::MachineInstr" = type { %"struct.llvm::ilist_node<llvm::MachineInstr>", %"struct.llvm::TargetInstrDesc"*, i16, %"struct.std::vector<llvm::MachineOperand,std::allocator<llvm::MachineOperand> >", %"struct.std::list<llvm::MachineMemOperand,std::allocator<llvm::MachineMemOperand> >", %"struct.llvm::MachineBasicBlock"*, %"struct..1$_60" }
-	%"struct.llvm::MachineJumpTableInfo" = type opaque
-	%"struct.llvm::MachineLocation" = type { i8, i32, i32 }
-	%"struct.llvm::MachineModuleInfo" = type { %"struct.llvm::ImmutablePass", %"struct.std::vector<int,std::allocator<int> >", %"struct.std::vector<llvm::MachineMove,std::allocator<llvm::MachineMove> >", %"struct.std::vector<llvm::LandingPadInfo,std::allocator<llvm::LandingPadInfo> >", %"struct.std::vector<llvm::GlobalVariable*,std::allocator<llvm::GlobalVariable*> >", %"struct.std::vector<int,std::allocator<int> >", %"struct.std::vector<int,std::allocator<int> >", %"struct.std::vector<llvm::Function*,std::allocator<llvm::Function*> >", %"struct.llvm::SmallPtrSet<const llvm::Function*,32u>", i8, i8, i8 }
-	%"struct.llvm::MachineMove" = type { i32, %"struct.llvm::MachineLocation", %"struct.llvm::MachineLocation" }
-	%"struct.llvm::MachineOperand" = type { i8, i8, i8, %"struct.llvm::MachineInstr"*, %"struct.llvm::MachineOperand::$_57" }
-	%"struct.llvm::MachineOperand::$_57" = type { %"struct..0$_58" }
-	%"struct.llvm::MachineRegisterInfo" = type { %"struct.std::vector<std::pair<const llvm::TargetRegisterClass*, llvm::MachineOperand*>,std::allocator<std::pair<const llvm::TargetRegisterClass*, llvm::MachineOperand*> > >", %"struct.std::vector<std::vector<unsigned int, std::allocator<unsigned int> >,std::allocator<std::vector<unsigned int, std::allocator<unsigned int> > > >", %"struct.llvm::MachineOperand"**, %"struct.llvm::BitVector", %"struct.std::vector<std::pair<unsigned int, unsigned int>,std::allocator<std::pair<unsigned int, unsigned int> > >", %"struct.std::vector<int,std::allocator<int> >" }
-	%"struct.llvm::Module" = type opaque
-	%"struct.llvm::ModulePass" = type { %"struct.llvm::Pass" }
-	%"struct.llvm::PATypeHandle" = type { %"struct.llvm::Type"*, %"struct.llvm::AbstractTypeUser"* }
-	%"struct.llvm::PATypeHolder" = type { %"struct.llvm::Type"* }
-	%"struct.llvm::PMDataManager" = type opaque
-	%"struct.llvm::Pass" = type { i32 (...)**, %"struct.llvm::AnalysisResolver"*, i32, %"struct.std::vector<std::pair<const llvm::PassInfo*, llvm::Pass*>,std::allocator<std::pair<const llvm::PassInfo*, llvm::Pass*> > >" }
-	%"struct.llvm::PassInfo" = type { i8*, i8*, i32, i8, i8, i8, %"struct.std::vector<const llvm::PassInfo*,std::allocator<const llvm::PassInfo*> >", %"struct.llvm::Pass"* ()* }
-	%"struct.llvm::Recycler<llvm::MachineBasicBlock,116ul,4ul>" = type { %"struct.llvm::iplist<llvm::RecyclerStruct,llvm::ilist_traits<llvm::RecyclerStruct> >" }
-	%"struct.llvm::RecyclerStruct" = type { %"struct.llvm::RecyclerStruct"*, %"struct.llvm::RecyclerStruct"* }
-	%"struct.llvm::RecyclingAllocator<llvm::BumpPtrAllocator,llvm::SDNode,132ul,4ul>" = type { %"struct.llvm::Recycler<llvm::MachineBasicBlock,116ul,4ul>", %"struct.llvm::BumpPtrAllocator" }
-	%"struct.llvm::SDNode" = type { %"struct.llvm::BumpPtrAllocator", %"struct.llvm::ilist_node<llvm::SDNode>", i16, i16, i32, %"struct.llvm::SDUse"*, %"struct.llvm::MVT"*, %"struct.llvm::SDUse"*, i16, i16, %"struct..1$_60" }
-	%"struct.llvm::SDUse" = type { %"struct.llvm::SDValue", %"struct.llvm::SDNode"*, %"struct.llvm::SDUse"**, %"struct.llvm::SDUse"* }
-	%"struct.llvm::SDVTList" = type { %"struct.llvm::MVT"*, i16 }
-	%"struct.llvm::SDValue" = type { %"struct.llvm::SDNode"*, i32 }
-	%"struct.llvm::SelectionDAG" = type { %"struct.llvm::TargetLowering"*, %"struct.llvm::MachineFunction"*, %"struct.llvm::FunctionLoweringInfo"*, %"struct.llvm::MachineModuleInfo"*, %"struct.llvm::DwarfWriter"*, %"struct.llvm::SDNode", %"struct.llvm::SDValue", %"struct.llvm::ilist<llvm::SDNode>", %"struct.llvm::RecyclingAllocator<llvm::BumpPtrAllocator,llvm::SDNode,132ul,4ul>", %"struct.llvm::FoldingSet<llvm::SDNode>", %"struct.llvm::BumpPtrAllocator", %"struct.llvm::BumpPtrAllocator", %"struct.std::map<const llvm::SDNode*,std::basic_string<char, std::char_traits<char>, std::allocator<char> >,std::less<const llvm::SDNode*>,std::allocator<std::pair<const llvm::SDNode* const, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >", %"struct.std::vector<llvm::SDVTList,std::allocator<llvm::SDVTList> >", %"struct.std::vector<llvm::CondCodeSDNode*,std::allocator<llvm::CondCodeSDNode*> >", %"struct.std::vector<llvm::SDNode*,std::allocator<llvm::SDNode*> >", %"struct.std::map<const llvm::SDNode*,std::basic_string<char, std::char_traits<char>, std::allocator<char> >,std::less<const llvm::SDNode*>,std::allocator<std::pair<const llvm::SDNode* const, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >", %"struct.llvm::StringMap<llvm::SDNode*,llvm::MallocAllocator>", %"struct.llvm::StringMap<llvm::SDNode*,llvm::MallocAllocator>" }
-	%"struct.llvm::SmallPtrSet<const llvm::Function*,32u>" = type { %"struct.llvm::SmallPtrSetImpl", [32 x i8*] }
-	%"struct.llvm::SmallPtrSetImpl" = type { i8**, i32, i32, i32, [1 x i8*] }
-	%"struct.llvm::SmallVector<llvm::SDValue,16u>" = type <{ [17 x i8], [127 x i8] }>
-	%"struct.llvm::SmallVector<unsigned int,1u>" = type <{ [17 x i8], [3 x i8], [3 x i32] }>
-	%"struct.llvm::StringMap<llvm::SDNode*,llvm::MallocAllocator>" = type { %"struct.llvm::StringMapImpl", %struct.__false_type }
-	%"struct.llvm::StringMapImpl" = type { %"struct.llvm::StringMapImpl::ItemBucket"*, i32, i32, i32, i32 }
-	%"struct.llvm::StringMapImpl::ItemBucket" = type { i32, %"struct..1$_60"* }
-	%"struct.llvm::TargetAsmInfo" = type opaque
-	%"struct.llvm::TargetData" = type <{ %"struct.llvm::ImmutablePass", i8, i8, i8, i8, [4 x i8], %"struct.llvm::SmallVector<llvm::SDValue,16u>" }>
-	%"struct.llvm::TargetFrameInfo" = type { i32 (...)**, i32, i32, i32 }
-	%"struct.llvm::TargetInstrDesc" = type { i16, i16, i16, i16, i8*, i32, i32, i32*, i32*, %"struct.llvm::TargetRegisterClass"**, %"struct.llvm::TargetOperandInfo"* }
-	%"struct.llvm::TargetLowering" = type { i32 (...)**, %"struct.llvm::TargetMachine"*, %"struct.llvm::TargetData"*, %"struct.llvm::MVT", i8, i8, i8, i8, i8, i8, i8, %"struct.llvm::MVT", i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, i32, [30 x %"struct.llvm::TargetRegisterClass"*], [30 x i8], [30 x %"struct.llvm::MVT"], [30 x %"struct.llvm::MVT"], [179 x i64], [4 x i64], [30 x i64], [2 x [5 x i64]], [30 x i64], [24 x i64], %"struct.llvm::TargetLowering::ValueTypeActionImpl", %"struct.std::vector<llvm::APFloat,std::allocator<llvm::APFloat> >", %"struct.std::vector<std::pair<llvm::MVT, llvm::TargetRegisterClass*>,std::allocator<std::pair<llvm::MVT, llvm::TargetRegisterClass*> > >", [23 x i8], %"struct.std::map<const llvm::SDNode*,std::basic_string<char, std::char_traits<char>, std::allocator<char> >,std::less<const llvm::SDNode*>,std::allocator<std::pair<const llvm::SDNode* const, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >", [180 x i8*], [180 x i32], i32, i32, i32, i8 }
-	%"struct.llvm::TargetLowering::ValueTypeActionImpl" = type { [2 x i32] }
-	%"struct.llvm::TargetMachine" = type { i32 (...)**, %"struct.llvm::TargetAsmInfo"* }
-	%"struct.llvm::TargetOperandInfo" = type { i16, i16, i32 }
-	%"struct.llvm::TargetRegisterClass" = type { i32 (...)**, i32, i8, %"struct.llvm::MVT"*, %"struct.llvm::TargetRegisterClass"**, %"struct.llvm::TargetRegisterClass"**, i32, i32, i32, i32*, i32* }
-	%"struct.llvm::Type" = type { %"struct.llvm::AbstractTypeUser", i8, [3 x i8], i32, %"struct.llvm::Type"*, %"struct.std::vector<llvm::AbstractTypeUser*,std::allocator<llvm::AbstractTypeUser*> >", i32, %"struct.llvm::PATypeHandle"* }
-	%"struct.llvm::Use" = type { %"struct.llvm::Value"*, %"struct.llvm::Use"*, %"struct..1$_60" }
-	%"struct.llvm::User" = type { %"struct.llvm::Value", %"struct.llvm::Use"*, i32 }
-	%"struct.llvm::Value" = type { i32 (...)**, i16, i16, %"struct.llvm::PATypeHolder", %"struct.llvm::Use"*, %"struct.llvm::ValueName"* }
-	%"struct.llvm::ValueName" = type opaque
-	%"struct.llvm::ValueSymbolTable" = type opaque
-	%"struct.llvm::fltSemantics" = type opaque
-	%"struct.llvm::ilist<llvm::MachineBasicBlock>" = type { %"struct.llvm::iplist<llvm::MachineBasicBlock,llvm::ilist_traits<llvm::MachineBasicBlock> >" }
-	%"struct.llvm::ilist<llvm::MachineInstr>" = type { %"struct.llvm::iplist<llvm::MachineInstr,llvm::ilist_traits<llvm::MachineInstr> >" }
-	%"struct.llvm::ilist<llvm::SDNode>" = type { %"struct.llvm::iplist<llvm::SDNode,llvm::ilist_traits<llvm::SDNode> >" }
-	%"struct.llvm::ilist_node<llvm::Argument>" = type { %"struct.llvm::Argument"*, %"struct.llvm::Argument"* }
-	%"struct.llvm::ilist_node<llvm::BasicBlock>" = type { %"struct.llvm::BasicBlock"*, %"struct.llvm::BasicBlock"* }
-	%"struct.llvm::ilist_node<llvm::Function>" = type { %"struct.llvm::Function"*, %"struct.llvm::Function"* }
-	%"struct.llvm::ilist_node<llvm::GlobalVariable>" = type { %"struct.llvm::GlobalVariable"*, %"struct.llvm::GlobalVariable"* }
-	%"struct.llvm::ilist_node<llvm::Instruction>" = type { %"struct.llvm::Instruction"*, %"struct.llvm::Instruction"* }
-	%"struct.llvm::ilist_node<llvm::MachineBasicBlock>" = type { %"struct.llvm::MachineBasicBlock"*, %"struct.llvm::MachineBasicBlock"* }
-	%"struct.llvm::ilist_node<llvm::MachineInstr>" = type { %"struct.llvm::MachineInstr"*, %"struct.llvm::MachineInstr"* }
-	%"struct.llvm::ilist_node<llvm::SDNode>" = type { %"struct.llvm::SDNode"*, %"struct.llvm::SDNode"* }
-	%"struct.llvm::ilist_traits<llvm::MachineBasicBlock>" = type { %"struct.llvm::MachineBasicBlock" }
-	%"struct.llvm::ilist_traits<llvm::MachineInstr>" = type { %"struct.llvm::MachineInstr", %"struct.llvm::MachineBasicBlock"* }
-	%"struct.llvm::ilist_traits<llvm::RecyclerStruct>" = type { %"struct.llvm::RecyclerStruct" }
-	%"struct.llvm::ilist_traits<llvm::SDNode>" = type { %"struct.llvm::SDNode" }
-	%"struct.llvm::iplist<llvm::Argument,llvm::ilist_traits<llvm::Argument> >" = type { %"struct.llvm::Argument"* }
-	%"struct.llvm::iplist<llvm::BasicBlock,llvm::ilist_traits<llvm::BasicBlock> >" = type { %"struct.llvm::BasicBlock"* }
-	%"struct.llvm::iplist<llvm::Instruction,llvm::ilist_traits<llvm::Instruction> >" = type { %"struct.llvm::Instruction"* }
-	%"struct.llvm::iplist<llvm::MachineBasicBlock,llvm::ilist_traits<llvm::MachineBasicBlock> >" = type { %"struct.llvm::ilist_traits<llvm::MachineBasicBlock>", %"struct.llvm::MachineBasicBlock"* }
-	%"struct.llvm::iplist<llvm::MachineInstr,llvm::ilist_traits<llvm::MachineInstr> >" = type { %"struct.llvm::ilist_traits<llvm::MachineInstr>", %"struct.llvm::MachineInstr"* }
-	%"struct.llvm::iplist<llvm::RecyclerStruct,llvm::ilist_traits<llvm::RecyclerStruct> >" = type { %"struct.llvm::ilist_traits<llvm::RecyclerStruct>", %"struct.llvm::RecyclerStruct"* }
-	%"struct.llvm::iplist<llvm::SDNode,llvm::ilist_traits<llvm::SDNode> >" = type { %"struct.llvm::ilist_traits<llvm::SDNode>", %"struct.llvm::SDNode"* }
-	%"struct.std::_List_base<llvm::MachineMemOperand,std::allocator<llvm::MachineMemOperand> >" = type { %"struct.llvm::ilist_traits<llvm::RecyclerStruct>" }
-	%"struct.std::_Rb_tree<const llvm::SDNode*,std::pair<const llvm::SDNode* const, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >,std::_Select1st<std::pair<const llvm::SDNode* const, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > >,std::less<const llvm::SDNode*>,std::allocator<std::pair<const llvm::SDNode* const, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >" = type { %"struct.std::_Rb_tree<const llvm::SDNode*,std::pair<const llvm::SDNode* const, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >,std::_Select1st<std::pair<const llvm::SDNode* const, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > >,std::less<const llvm::SDNode*>,std::allocator<std::pair<const llvm::SDNode* const, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >::_Rb_tree_impl<std::less<const llvm::SDNode*>,false>" }
-	%"struct.std::_Rb_tree<const llvm::SDNode*,std::pair<const llvm::SDNode* const, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >,std::_Select1st<std::pair<const llvm::SDNode* const, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > >,std::less<const llvm::SDNode*>,std::allocator<std::pair<const llvm::SDNode* const, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >::_Rb_tree_impl<std::less<const llvm::SDNode*>,false>" = type { %struct.__false_type, %"struct.std::_Rb_tree_node_base", i32 }
-	%"struct.std::_Rb_tree_node_base" = type { i32, %"struct.std::_Rb_tree_node_base"*, %"struct.std::_Rb_tree_node_base"*, %"struct.std::_Rb_tree_node_base"* }
-	%"struct.std::_Vector_base<const llvm::PassInfo*,std::allocator<const llvm::PassInfo*> >" = type { %"struct.std::_Vector_base<const llvm::PassInfo*,std::allocator<const llvm::PassInfo*> >::_Vector_impl" }
-	%"struct.std::_Vector_base<const llvm::PassInfo*,std::allocator<const llvm::PassInfo*> >::_Vector_impl" = type { %"struct.llvm::PassInfo"**, %"struct.llvm::PassInfo"**, %"struct.llvm::PassInfo"** }
-	%"struct.std::_Vector_base<int,std::allocator<int> >" = type { %"struct.std::_Vector_base<int,std::allocator<int> >::_Vector_impl" }
-	%"struct.std::_Vector_base<int,std::allocator<int> >::_Vector_impl" = type { i32*, i32*, i32* }
-	%"struct.std::_Vector_base<llvm::APFloat,std::allocator<llvm::APFloat> >" = type { %"struct.std::_Vector_base<llvm::APFloat,std::allocator<llvm::APFloat> >::_Vector_impl" }
-	%"struct.std::_Vector_base<llvm::APFloat,std::allocator<llvm::APFloat> >::_Vector_impl" = type { %"struct.llvm::APFloat"*, %"struct.llvm::APFloat"*, %"struct.llvm::APFloat"* }
-	%"struct.std::_Vector_base<llvm::AbstractTypeUser*,std::allocator<llvm::AbstractTypeUser*> >" = type { %"struct.std::_Vector_base<llvm::AbstractTypeUser*,std::allocator<llvm::AbstractTypeUser*> >::_Vector_impl" }
-	%"struct.std::_Vector_base<llvm::AbstractTypeUser*,std::allocator<llvm::AbstractTypeUser*> >::_Vector_impl" = type { %"struct.llvm::AbstractTypeUser"**, %"struct.llvm::AbstractTypeUser"**, %"struct.llvm::AbstractTypeUser"** }
-	%"struct.std::_Vector_base<llvm::CalleeSavedInfo,std::allocator<llvm::CalleeSavedInfo> >" = type { %"struct.std::_Vector_base<llvm::CalleeSavedInfo,std::allocator<llvm::CalleeSavedInfo> >::_Vector_impl" }
-	%"struct.std::_Vector_base<llvm::CalleeSavedInfo,std::allocator<llvm::CalleeSavedInfo> >::_Vector_impl" = type { %"struct.llvm::CalleeSavedInfo"*, %"struct.llvm::CalleeSavedInfo"*, %"struct.llvm::CalleeSavedInfo"* }
-	%"struct.std::_Vector_base<llvm::CondCodeSDNode*,std::allocator<llvm::CondCodeSDNode*> >" = type { %"struct.std::_Vector_base<llvm::CondCodeSDNode*,std::allocator<llvm::CondCodeSDNode*> >::_Vector_impl" }
-	%"struct.std::_Vector_base<llvm::CondCodeSDNode*,std::allocator<llvm::CondCodeSDNode*> >::_Vector_impl" = type { %"struct.llvm::CondCodeSDNode"**, %"struct.llvm::CondCodeSDNode"**, %"struct.llvm::CondCodeSDNode"** }
-	%"struct.std::_Vector_base<llvm::DebugLocTuple,std::allocator<llvm::DebugLocTuple> >" = type { %"struct.std::_Vector_base<llvm::DebugLocTuple,std::allocator<llvm::DebugLocTuple> >::_Vector_impl" }
-	%"struct.std::_Vector_base<llvm::DebugLocTuple,std::allocator<llvm::DebugLocTuple> >::_Vector_impl" = type { %"struct.llvm::DebugLocTuple"*, %"struct.llvm::DebugLocTuple"*, %"struct.llvm::DebugLocTuple"* }
-	%"struct.std::_Vector_base<llvm::Function*,std::allocator<llvm::Function*> >" = type { %"struct.std::_Vector_base<llvm::Function*,std::allocator<llvm::Function*> >::_Vector_impl" }
-	%"struct.std::_Vector_base<llvm::Function*,std::allocator<llvm::Function*> >::_Vector_impl" = type { %"struct.llvm::Function"**, %"struct.llvm::Function"**, %"struct.llvm::Function"** }
-	%"struct.std::_Vector_base<llvm::GlobalVariable*,std::allocator<llvm::GlobalVariable*> >" = type { %"struct.std::_Vector_base<llvm::GlobalVariable*,std::allocator<llvm::GlobalVariable*> >::_Vector_impl" }
-	%"struct.std::_Vector_base<llvm::GlobalVariable*,std::allocator<llvm::GlobalVariable*> >::_Vector_impl" = type { %"struct.llvm::GlobalVariable"**, %"struct.llvm::GlobalVariable"**, %"struct.llvm::GlobalVariable"** }
-	%"struct.std::_Vector_base<llvm::LandingPadInfo,std::allocator<llvm::LandingPadInfo> >" = type { %"struct.std::_Vector_base<llvm::LandingPadInfo,std::allocator<llvm::LandingPadInfo> >::_Vector_impl" }
-	%"struct.std::_Vector_base<llvm::LandingPadInfo,std::allocator<llvm::LandingPadInfo> >::_Vector_impl" = type { %"struct.llvm::LandingPadInfo"*, %"struct.llvm::LandingPadInfo"*, %"struct.llvm::LandingPadInfo"* }
-	%"struct.std::_Vector_base<llvm::MachineBasicBlock*,std::allocator<llvm::MachineBasicBlock*> >" = type { %"struct.std::_Vector_base<llvm::MachineBasicBlock*,std::allocator<llvm::MachineBasicBlock*> >::_Vector_impl" }
-	%"struct.std::_Vector_base<llvm::MachineBasicBlock*,std::allocator<llvm::MachineBasicBlock*> >::_Vector_impl" = type { %"struct.llvm::MachineBasicBlock"**, %"struct.llvm::MachineBasicBlock"**, %"struct.llvm::MachineBasicBlock"** }
-	%"struct.std::_Vector_base<llvm::MachineFrameInfo::StackObject,std::allocator<llvm::MachineFrameInfo::StackObject> >" = type { %"struct.std::_Vector_base<llvm::MachineFrameInfo::StackObject,std::allocator<llvm::MachineFrameInfo::StackObject> >::_Vector_impl" }
-	%"struct.std::_Vector_base<llvm::MachineFrameInfo::StackObject,std::allocator<llvm::MachineFrameInfo::StackObject> >::_Vector_impl" = type { %"struct.llvm::MachineFrameInfo::StackObject"*, %"struct.llvm::MachineFrameInfo::StackObject"*, %"struct.llvm::MachineFrameInfo::StackObject"* }
-	%"struct.std::_Vector_base<llvm::MachineMove,std::allocator<llvm::MachineMove> >" = type { %"struct.std::_Vector_base<llvm::MachineMove,std::allocator<llvm::MachineMove> >::_Vector_impl" }
-	%"struct.std::_Vector_base<llvm::MachineMove,std::allocator<llvm::MachineMove> >::_Vector_impl" = type { %"struct.llvm::MachineMove"*, %"struct.llvm::MachineMove"*, %"struct.llvm::MachineMove"* }
-	%"struct.std::_Vector_base<llvm::MachineOperand,std::allocator<llvm::MachineOperand> >" = type { %"struct.std::_Vector_base<llvm::MachineOperand,std::allocator<llvm::MachineOperand> >::_Vector_impl" }
-	%"struct.std::_Vector_base<llvm::MachineOperand,std::allocator<llvm::MachineOperand> >::_Vector_impl" = type { %"struct.llvm::MachineOperand"*, %"struct.llvm::MachineOperand"*, %"struct.llvm::MachineOperand"* }
-	%"struct.std::_Vector_base<llvm::SDNode*,std::allocator<llvm::SDNode*> >" = type { %"struct.std::_Vector_base<llvm::SDNode*,std::allocator<llvm::SDNode*> >::_Vector_impl" }
-	%"struct.std::_Vector_base<llvm::SDNode*,std::allocator<llvm::SDNode*> >::_Vector_impl" = type { %"struct.llvm::SDNode"**, %"struct.llvm::SDNode"**, %"struct.llvm::SDNode"** }
-	%"struct.std::_Vector_base<llvm::SDVTList,std::allocator<llvm::SDVTList> >" = type { %"struct.std::_Vector_base<llvm::SDVTList,std::allocator<llvm::SDVTList> >::_Vector_impl" }
-	%"struct.std::_Vector_base<llvm::SDVTList,std::allocator<llvm::SDVTList> >::_Vector_impl" = type { %"struct.llvm::SDVTList"*, %"struct.llvm::SDVTList"*, %"struct.llvm::SDVTList"* }
-	%"struct.std::_Vector_base<std::pair<const llvm::PassInfo*, llvm::Pass*>,std::allocator<std::pair<const llvm::PassInfo*, llvm::Pass*> > >" = type { %"struct.std::_Vector_base<std::pair<const llvm::PassInfo*, llvm::Pass*>,std::allocator<std::pair<const llvm::PassInfo*, llvm::Pass*> > >::_Vector_impl" }
-	%"struct.std::_Vector_base<std::pair<const llvm::PassInfo*, llvm::Pass*>,std::allocator<std::pair<const llvm::PassInfo*, llvm::Pass*> > >::_Vector_impl" = type { %"struct.std::pair<const llvm::PassInfo*,llvm::Pass*>"*, %"struct.std::pair<const llvm::PassInfo*,llvm::Pass*>"*, %"struct.std::pair<const llvm::PassInfo*,llvm::Pass*>"* }
-	%"struct.std::_Vector_base<std::pair<const llvm::TargetRegisterClass*, llvm::MachineOperand*>,std::allocator<std::pair<const llvm::TargetRegisterClass*, llvm::MachineOperand*> > >" = type { %"struct.std::_Vector_base<std::pair<const llvm::TargetRegisterClass*, llvm::MachineOperand*>,std::allocator<std::pair<const llvm::TargetRegisterClass*, llvm::MachineOperand*> > >::_Vector_impl" }
-	%"struct.std::_Vector_base<std::pair<const llvm::TargetRegisterClass*, llvm::MachineOperand*>,std::allocator<std::pair<const llvm::TargetRegisterClass*, llvm::MachineOperand*> > >::_Vector_impl" = type { %"struct.std::pair<const llvm::TargetRegisterClass*,llvm::MachineOperand*>"*, %"struct.std::pair<const llvm::TargetRegisterClass*,llvm::MachineOperand*>"*, %"struct.std::pair<const llvm::TargetRegisterClass*,llvm::MachineOperand*>"* }
-	%"struct.std::_Vector_base<std::pair<llvm::MVT, llvm::TargetRegisterClass*>,std::allocator<std::pair<llvm::MVT, llvm::TargetRegisterClass*> > >" = type { %"struct.std::_Vector_base<std::pair<llvm::MVT, llvm::TargetRegisterClass*>,std::allocator<std::pair<llvm::MVT, llvm::TargetRegisterClass*> > >::_Vector_impl" }
-	%"struct.std::_Vector_base<std::pair<llvm::MVT, llvm::TargetRegisterClass*>,std::allocator<std::pair<llvm::MVT, llvm::TargetRegisterClass*> > >::_Vector_impl" = type { %"struct.std::pair<llvm::MVT,llvm::TargetRegisterClass*>"*, %"struct.std::pair<llvm::MVT,llvm::TargetRegisterClass*>"*, %"struct.std::pair<llvm::MVT,llvm::TargetRegisterClass*>"* }
-	%"struct.std::_Vector_base<std::pair<unsigned int, unsigned int>,std::allocator<std::pair<unsigned int, unsigned int> > >" = type { %"struct.std::_Vector_base<std::pair<unsigned int, unsigned int>,std::allocator<std::pair<unsigned int, unsigned int> > >::_Vector_impl" }
-	%"struct.std::_Vector_base<std::pair<unsigned int, unsigned int>,std::allocator<std::pair<unsigned int, unsigned int> > >::_Vector_impl" = type { %"struct.std::pair<int,int>"*, %"struct.std::pair<int,int>"*, %"struct.std::pair<int,int>"* }
-	%"struct.std::_Vector_base<std::vector<unsigned int, std::allocator<unsigned int> >,std::allocator<std::vector<unsigned int, std::allocator<unsigned int> > > >" = type { %"struct.std::_Vector_base<std::vector<unsigned int, std::allocator<unsigned int> >,std::allocator<std::vector<unsigned int, std::allocator<unsigned int> > > >::_Vector_impl" }
-	%"struct.std::_Vector_base<std::vector<unsigned int, std::allocator<unsigned int> >,std::allocator<std::vector<unsigned int, std::allocator<unsigned int> > > >::_Vector_impl" = type { %"struct.std::vector<int,std::allocator<int> >"*, %"struct.std::vector<int,std::allocator<int> >"*, %"struct.std::vector<int,std::allocator<int> >"* }
-	%"struct.std::list<llvm::MachineMemOperand,std::allocator<llvm::MachineMemOperand> >" = type { %"struct.std::_List_base<llvm::MachineMemOperand,std::allocator<llvm::MachineMemOperand> >" }
-	%"struct.std::map<const llvm::SDNode*,std::basic_string<char, std::char_traits<char>, std::allocator<char> >,std::less<const llvm::SDNode*>,std::allocator<std::pair<const llvm::SDNode* const, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >" = type { %"struct.std::_Rb_tree<const llvm::SDNode*,std::pair<const llvm::SDNode* const, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >,std::_Select1st<std::pair<const llvm::SDNode* const, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > >,std::less<const llvm::SDNode*>,std::allocator<std::pair<const llvm::SDNode* const, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >" }
-	%"struct.std::pair<const llvm::PassInfo*,llvm::Pass*>" = type { %"struct.llvm::PassInfo"*, %"struct.llvm::Pass"* }
-	%"struct.std::pair<const llvm::TargetRegisterClass*,llvm::MachineOperand*>" = type { %"struct.llvm::TargetRegisterClass"*, %"struct.llvm::MachineOperand"* }
-	%"struct.std::pair<int,int>" = type { i32, i32 }
-	%"struct.std::pair<llvm::DebugLocTuple,unsigned int>" = type { %"struct.llvm::DebugLocTuple", i32 }
-	%"struct.std::pair<llvm::MVT,llvm::TargetRegisterClass*>" = type { %"struct.llvm::MVT", %"struct.llvm::TargetRegisterClass"* }
-	%"struct.std::string" = type { %"struct.llvm::BumpPtrAllocator" }
-	%"struct.std::vector<const llvm::PassInfo*,std::allocator<const llvm::PassInfo*> >" = type { %"struct.std::_Vector_base<const llvm::PassInfo*,std::allocator<const llvm::PassInfo*> >" }
-	%"struct.std::vector<int,std::allocator<int> >" = type { %"struct.std::_Vector_base<int,std::allocator<int> >" }
-	%"struct.std::vector<llvm::APFloat,std::allocator<llvm::APFloat> >" = type { %"struct.std::_Vector_base<llvm::APFloat,std::allocator<llvm::APFloat> >" }
-	%"struct.std::vector<llvm::AbstractTypeUser*,std::allocator<llvm::AbstractTypeUser*> >" = type { %"struct.std::_Vector_base<llvm::AbstractTypeUser*,std::allocator<llvm::AbstractTypeUser*> >" }
-	%"struct.std::vector<llvm::CalleeSavedInfo,std::allocator<llvm::CalleeSavedInfo> >" = type { %"struct.std::_Vector_base<llvm::CalleeSavedInfo,std::allocator<llvm::CalleeSavedInfo> >" }
-	%"struct.std::vector<llvm::CondCodeSDNode*,std::allocator<llvm::CondCodeSDNode*> >" = type { %"struct.std::_Vector_base<llvm::CondCodeSDNode*,std::allocator<llvm::CondCodeSDNode*> >" }
-	%"struct.std::vector<llvm::DebugLocTuple,std::allocator<llvm::DebugLocTuple> >" = type { %"struct.std::_Vector_base<llvm::DebugLocTuple,std::allocator<llvm::DebugLocTuple> >" }
-	%"struct.std::vector<llvm::Function*,std::allocator<llvm::Function*> >" = type { %"struct.std::_Vector_base<llvm::Function*,std::allocator<llvm::Function*> >" }
-	%"struct.std::vector<llvm::GlobalVariable*,std::allocator<llvm::GlobalVariable*> >" = type { %"struct.std::_Vector_base<llvm::GlobalVariable*,std::allocator<llvm::GlobalVariable*> >" }
-	%"struct.std::vector<llvm::LandingPadInfo,std::allocator<llvm::LandingPadInfo> >" = type { %"struct.std::_Vector_base<llvm::LandingPadInfo,std::allocator<llvm::LandingPadInfo> >" }
-	%"struct.std::vector<llvm::MachineBasicBlock*,std::allocator<llvm::MachineBasicBlock*> >" = type { %"struct.std::_Vector_base<llvm::MachineBasicBlock*,std::allocator<llvm::MachineBasicBlock*> >" }
-	%"struct.std::vector<llvm::MachineFrameInfo::StackObject,std::allocator<llvm::MachineFrameInfo::StackObject> >" = type { %"struct.std::_Vector_base<llvm::MachineFrameInfo::StackObject,std::allocator<llvm::MachineFrameInfo::StackObject> >" }
-	%"struct.std::vector<llvm::MachineMove,std::allocator<llvm::MachineMove> >" = type { %"struct.std::_Vector_base<llvm::MachineMove,std::allocator<llvm::MachineMove> >" }
-	%"struct.std::vector<llvm::MachineOperand,std::allocator<llvm::MachineOperand> >" = type { %"struct.std::_Vector_base<llvm::MachineOperand,std::allocator<llvm::MachineOperand> >" }
-	%"struct.std::vector<llvm::SDNode*,std::allocator<llvm::SDNode*> >" = type { %"struct.std::_Vector_base<llvm::SDNode*,std::allocator<llvm::SDNode*> >" }
-	%"struct.std::vector<llvm::SDVTList,std::allocator<llvm::SDVTList> >" = type { %"struct.std::_Vector_base<llvm::SDVTList,std::allocator<llvm::SDVTList> >" }
-	%"struct.std::vector<std::pair<const llvm::PassInfo*, llvm::Pass*>,std::allocator<std::pair<const llvm::PassInfo*, llvm::Pass*> > >" = type { %"struct.std::_Vector_base<std::pair<const llvm::PassInfo*, llvm::Pass*>,std::allocator<std::pair<const llvm::PassInfo*, llvm::Pass*> > >" }
-	%"struct.std::vector<std::pair<const llvm::TargetRegisterClass*, llvm::MachineOperand*>,std::allocator<std::pair<const llvm::TargetRegisterClass*, llvm::MachineOperand*> > >" = type { %"struct.std::_Vector_base<std::pair<const llvm::TargetRegisterClass*, llvm::MachineOperand*>,std::allocator<std::pair<const llvm::TargetRegisterClass*, llvm::MachineOperand*> > >" }
-	%"struct.std::vector<std::pair<llvm::MVT, llvm::TargetRegisterClass*>,std::allocator<std::pair<llvm::MVT, llvm::TargetRegisterClass*> > >" = type { %"struct.std::_Vector_base<std::pair<llvm::MVT, llvm::TargetRegisterClass*>,std::allocator<std::pair<llvm::MVT, llvm::TargetRegisterClass*> > >" }
-	%"struct.std::vector<std::pair<unsigned int, unsigned int>,std::allocator<std::pair<unsigned int, unsigned int> > >" = type { %"struct.std::_Vector_base<std::pair<unsigned int, unsigned int>,std::allocator<std::pair<unsigned int, unsigned int> > >" }
-	%"struct.std::vector<std::vector<unsigned int, std::allocator<unsigned int> >,std::allocator<std::vector<unsigned int, std::allocator<unsigned int> > > >" = type { %"struct.std::_Vector_base<std::vector<unsigned int, std::allocator<unsigned int> >,std::allocator<std::vector<unsigned int, std::allocator<unsigned int> > > >" }
-@"\01LC81" = internal constant [65 x i8] c"/Users/echeng/LLVM/llvm/include/llvm/CodeGen/SelectionDAGNodes.h\00"		; <[65 x i8]*> [#uses=1]
- at _ZZNK4llvm6SDNode12getValueTypeEjE8__func__ = internal constant [13 x i8] c"getValueType\00"		; <[13 x i8]*> [#uses=1]
-@"\01LC83" = internal constant [46 x i8] c"ResNo < NumValues && \22Illegal result number!\22\00"		; <[46 x i8]*> [#uses=1]
-@"\01LC197" = internal constant [16 x i8] c"___tls_get_addr\00"		; <[16 x i8]*> [#uses=1]
- at llvm.used1 = appending global [1 x i8*] [ i8* bitcast (i64 (%"struct.llvm::GlobalAddressSDNode"*, %"struct.llvm::SelectionDAG"*, %"struct.llvm::MVT"*)* @_ZL31LowerToTLSGeneralDynamicModel32PN4llvm19GlobalAddressSDNodeERNS_12SelectionDAGENS_3MVTE to i8*) ], section "llvm.metadata"		; <[1 x i8*]*> [#uses=0]
-
-define fastcc i64 @_ZL31LowerToTLSGeneralDynamicModel32PN4llvm19GlobalAddressSDNodeERNS_12SelectionDAGENS_3MVTE(%"struct.llvm::GlobalAddressSDNode"* %GA, %"struct.llvm::SelectionDAG"* %DAG, %"struct.llvm::MVT"* byval align 4 %PtrVT) nounwind noinline {
-entry:
-	%VT2.i185 = alloca %"struct.llvm::MVT", align 8		; <%"struct.llvm::MVT"*> [#uses=2]
-	%VT1.i186 = alloca %"struct.llvm::MVT", align 8		; <%"struct.llvm::MVT"*> [#uses=2]
-	%Ops.i187 = alloca [4 x %"struct.llvm::SDValue"], align 8		; <[4 x %"struct.llvm::SDValue"]*> [#uses=9]
-	%0 = alloca %"struct.llvm::MVT", align 8		; <%"struct.llvm::MVT"*> [#uses=2]
-	%VT182 = alloca %"struct.llvm::MVT", align 8		; <%"struct.llvm::MVT"*> [#uses=2]
-	%VT2.i173 = alloca %"struct.llvm::MVT", align 8		; <%"struct.llvm::MVT"*> [#uses=2]
-	%VT1.i174 = alloca %"struct.llvm::MVT", align 8		; <%"struct.llvm::MVT"*> [#uses=2]
-	%Ops.i175 = alloca [4 x %"struct.llvm::SDValue"], align 8		; <[4 x %"struct.llvm::SDValue"]*> [#uses=9]
-	%1 = alloca %"struct.llvm::MVT", align 8		; <%"struct.llvm::MVT"*> [#uses=2]
-	%VT3.i = alloca %"struct.llvm::MVT", align 8		; <%"struct.llvm::MVT"*> [#uses=2]
-	%VT2.i = alloca %"struct.llvm::MVT", align 8		; <%"struct.llvm::MVT"*> [#uses=2]
-	%VT1.i = alloca %"struct.llvm::MVT", align 8		; <%"struct.llvm::MVT"*> [#uses=2]
-	%Ops.i = alloca [3 x %"struct.llvm::SDValue"], align 8		; <[3 x %"struct.llvm::SDValue"]*> [#uses=7]
-	%VT = alloca %"struct.llvm::MVT", align 8		; <%"struct.llvm::MVT"*> [#uses=2]
-	%Ops1 = alloca [5 x %"struct.llvm::SDValue"], align 8		; <[5 x %"struct.llvm::SDValue"]*> [#uses=11]
-	%Ops = alloca [3 x %"struct.llvm::SDValue"], align 8		; <[3 x %"struct.llvm::SDValue"]*> [#uses=7]
-	%NodeTys = alloca %"struct.llvm::SDVTList", align 8		; <%"struct.llvm::SDVTList"*> [#uses=4]
-	%2 = alloca %"struct.llvm::MVT", align 8		; <%"struct.llvm::MVT"*> [#uses=2]
-	%3 = alloca %"struct.llvm::MVT", align 8		; <%"struct.llvm::MVT"*> [#uses=2]
-	%4 = alloca %"struct.llvm::MVT", align 8		; <%"struct.llvm::MVT"*> [#uses=2]
-	%5 = alloca %"struct.llvm::MVT", align 8		; <%"struct.llvm::MVT"*> [#uses=2]
-	%6 = getelementptr %"struct.llvm::GlobalAddressSDNode"* %GA, i32 0, i32 0, i32 10, i32 0		; <i32*> [#uses=1]
-	%7 = load i32* %6, align 4		; <i32> [#uses=5]
-	%8 = call i64 @_ZN4llvm12SelectionDAG7getNodeEjNS_8DebugLocENS_3MVTE(%"struct.llvm::SelectionDAG"* %DAG, i32 208, i32 0, %"struct.llvm::MVT"* byval align 4 %PtrVT) nounwind		; <i64> [#uses=2]
-	%9 = trunc i64 %8 to i32		; <i32> [#uses=1]
-	%sroa.store.elt = lshr i64 %8, 32		; <i64> [#uses=1]
-	%10 = trunc i64 %sroa.store.elt to i32		; <i32> [#uses=3]
-	%tmp52 = inttoptr i32 %9 to %"struct.llvm::SDNode"*		; <%"struct.llvm::SDNode"*> [#uses=3]
-	%11 = getelementptr %"struct.llvm::SelectionDAG"* %DAG, i32 0, i32 5		; <%"struct.llvm::SDNode"*> [#uses=1]
-	%12 = getelementptr %"struct.llvm::MVT"* %VT1.i186, i32 0, i32 0, i32 0		; <i32*> [#uses=1]
-	store i32 0, i32* %12, align 8
-	%13 = getelementptr %"struct.llvm::MVT"* %VT2.i185, i32 0, i32 0, i32 0		; <i32*> [#uses=1]
-	store i32 12, i32* %13, align 8
-	%14 = call i64 @_ZN4llvm12SelectionDAG9getVTListENS_3MVTES1_(%"struct.llvm::SelectionDAG"* %DAG, %"struct.llvm::MVT"* byval align 4 %VT1.i186, %"struct.llvm::MVT"* byval align 4 %VT2.i185) nounwind		; <i64> [#uses=1]
-	%15 = getelementptr [4 x %"struct.llvm::SDValue"]* %Ops.i187, i32 0, i32 0, i32 0		; <%"struct.llvm::SDNode"**> [#uses=1]
-	store %"struct.llvm::SDNode"* %11, %"struct.llvm::SDNode"** %15, align 8
-	%16 = getelementptr [4 x %"struct.llvm::SDValue"]* %Ops.i187, i32 0, i32 0, i32 1		; <i32*> [#uses=1]
-	store i32 0, i32* %16, align 4
-	%17 = getelementptr %"struct.llvm::SDNode"* %tmp52, i32 0, i32 9		; <i16*> [#uses=1]
-	%18 = load i16* %17, align 2		; <i16> [#uses=1]
-	%19 = zext i16 %18 to i32		; <i32> [#uses=1]
-	%20 = icmp ugt i32 %19, %10		; <i1> [#uses=1]
-	br i1 %20, label %_ZN4llvm12SelectionDAG12getCopyToRegENS_7SDValueENS_8DebugLocEjS1_S1_.exit193, label %bb.i.i.i188
-
-bb.i.i.i188:		; preds = %entry
-	call void @__assert_rtn(i8* getelementptr ([13 x i8]* @_ZZNK4llvm6SDNode12getValueTypeEjE8__func__, i32 0, i32 0), i8* getelementptr ([65 x i8]* @"\01LC81", i32 0, i32 0), i32 1314, i8* getelementptr ([46 x i8]* @"\01LC83", i32 0, i32 0)) noreturn nounwind
-	unreachable
-
-_ZN4llvm12SelectionDAG12getCopyToRegENS_7SDValueENS_8DebugLocEjS1_S1_.exit193:		; preds = %entry
-	%21 = trunc i64 %14 to i32		; <i32> [#uses=1]
-	%tmp4.i.i189 = inttoptr i32 %21 to %"struct.llvm::MVT"*		; <%"struct.llvm::MVT"*> [#uses=1]
-	%22 = getelementptr %"struct.llvm::SDNode"* %tmp52, i32 0, i32 6		; <%"struct.llvm::MVT"**> [#uses=1]
-	%23 = load %"struct.llvm::MVT"** %22, align 4		; <%"struct.llvm::MVT"*> [#uses=1]
-	%24 = getelementptr %"struct.llvm::MVT"* %23, i32 %10, i32 0, i32 0		; <i32*> [#uses=1]
-	%25 = load i32* %24, align 4		; <i32> [#uses=1]
-	%26 = getelementptr %"struct.llvm::MVT"* %0, i32 0, i32 0, i32 0		; <i32*> [#uses=1]
-	store i32 %25, i32* %26, align 8
-	%27 = call i64 @_ZN4llvm12SelectionDAG11getRegisterEjNS_3MVTE(%"struct.llvm::SelectionDAG"* %DAG, i32 19, %"struct.llvm::MVT"* byval align 4 %0) nounwind		; <i64> [#uses=2]
-	%28 = trunc i64 %27 to i32		; <i32> [#uses=1]
-	%sroa.store.elt.i190 = lshr i64 %27, 32		; <i64> [#uses=1]
-	%29 = trunc i64 %sroa.store.elt.i190 to i32		; <i32> [#uses=1]
-	%30 = getelementptr [4 x %"struct.llvm::SDValue"]* %Ops.i187, i32 0, i32 1, i32 0		; <%"struct.llvm::SDNode"**> [#uses=1]
-	%tmp5.i191 = inttoptr i32 %28 to %"struct.llvm::SDNode"*		; <%"struct.llvm::SDNode"*> [#uses=1]
-	store %"struct.llvm::SDNode"* %tmp5.i191, %"struct.llvm::SDNode"** %30, align 8
-	%31 = getelementptr [4 x %"struct.llvm::SDValue"]* %Ops.i187, i32 0, i32 1, i32 1		; <i32*> [#uses=1]
-	store i32 %29, i32* %31, align 4
-	%32 = getelementptr [4 x %"struct.llvm::SDValue"]* %Ops.i187, i32 0, i32 2, i32 0		; <%"struct.llvm::SDNode"**> [#uses=1]
-	store %"struct.llvm::SDNode"* %tmp52, %"struct.llvm::SDNode"** %32, align 8
-	%33 = getelementptr [4 x %"struct.llvm::SDValue"]* %Ops.i187, i32 0, i32 2, i32 1		; <i32*> [#uses=1]
-	store i32 %10, i32* %33, align 4
-	%34 = getelementptr [4 x %"struct.llvm::SDValue"]* %Ops.i187, i32 0, i32 3, i32 0		; <%"struct.llvm::SDNode"**> [#uses=1]
-	store %"struct.llvm::SDNode"* null, %"struct.llvm::SDNode"** %34, align 8
-	%35 = getelementptr [4 x %"struct.llvm::SDValue"]* %Ops.i187, i32 0, i32 3, i32 1		; <i32*> [#uses=1]
-	store i32 0, i32* %35, align 4
-	%36 = getelementptr [4 x %"struct.llvm::SDValue"]* %Ops.i187, i32 0, i32 0		; <%"struct.llvm::SDValue"*> [#uses=1]
-	%37 = call i64 @_ZN4llvm12SelectionDAG7getNodeEjNS_8DebugLocEPKNS_3MVTEjPKNS_7SDValueEj(%"struct.llvm::SelectionDAG"* %DAG, i32 36, i32 %7, %"struct.llvm::MVT"* %tmp4.i.i189, i32 2, %"struct.llvm::SDValue"* %36, i32 3) nounwind		; <i64> [#uses=2]
-	%38 = trunc i64 %37 to i32		; <i32> [#uses=1]
-	%tmp66 = inttoptr i32 %38 to %"struct.llvm::SDNode"*		; <%"struct.llvm::SDNode"*> [#uses=2]
-	%39 = getelementptr %"struct.llvm::MVT"* %5, i32 0, i32 0, i32 0		; <i32*> [#uses=1]
-	store i32 12, i32* %39, align 8
-	%40 = getelementptr %"struct.llvm::MVT"* %4, i32 0, i32 0, i32 0		; <i32*> [#uses=1]
-	store i32 0, i32* %40, align 8
-	%41 = call i64 @_ZN4llvm12SelectionDAG9getVTListENS_3MVTES1_S1_(%"struct.llvm::SelectionDAG"* %DAG, %"struct.llvm::MVT"* byval align 4 %PtrVT, %"struct.llvm::MVT"* byval align 4 %4, %"struct.llvm::MVT"* byval align 4 %5) nounwind		; <i64> [#uses=2]
-	%42 = trunc i64 %41 to i32		; <i32> [#uses=1]
-	%sroa.store.elt75 = lshr i64 %41, 32		; <i64> [#uses=1]
-	%43 = trunc i64 %sroa.store.elt75 to i16		; <i16> [#uses=1]
-	%44 = getelementptr %"struct.llvm::SDVTList"* %NodeTys, i32 0, i32 0		; <%"struct.llvm::MVT"**> [#uses=2]
-	%tmp78 = inttoptr i32 %42 to %"struct.llvm::MVT"*		; <%"struct.llvm::MVT"*> [#uses=1]
-	store %"struct.llvm::MVT"* %tmp78, %"struct.llvm::MVT"** %44, align 8
-	%45 = getelementptr %"struct.llvm::SDVTList"* %NodeTys, i32 0, i32 1		; <i16*> [#uses=2]
-	store i16 %43, i16* %45, align 4
-	%46 = getelementptr %"struct.llvm::GlobalAddressSDNode"* %GA, i32 0, i32 0, i32 9		; <i16*> [#uses=1]
-	%47 = load i16* %46, align 2		; <i16> [#uses=1]
-	%48 = icmp eq i16 %47, 0		; <i1> [#uses=1]
-	br i1 %48, label %bb.i, label %_ZNK4llvm6SDNode12getValueTypeEj.exit
-
-bb.i:		; preds = %_ZN4llvm12SelectionDAG12getCopyToRegENS_7SDValueENS_8DebugLocEjS1_S1_.exit193
-	call void @__assert_rtn(i8* getelementptr ([13 x i8]* @_ZZNK4llvm6SDNode12getValueTypeEjE8__func__, i32 0, i32 0), i8* getelementptr ([65 x i8]* @"\01LC81", i32 0, i32 0), i32 1314, i8* getelementptr ([46 x i8]* @"\01LC83", i32 0, i32 0)) noreturn nounwind
-	unreachable
-
-_ZNK4llvm6SDNode12getValueTypeEj.exit:		; preds = %_ZN4llvm12SelectionDAG12getCopyToRegENS_7SDValueENS_8DebugLocEjS1_S1_.exit193
-	%sroa.store.elt63 = lshr i64 %37, 32		; <i64> [#uses=1]
-	%49 = trunc i64 %sroa.store.elt63 to i32		; <i32> [#uses=1]
-	%50 = getelementptr %"struct.llvm::GlobalAddressSDNode"* %GA, i32 0, i32 2		; <i64*> [#uses=1]
-	%51 = load i64* %50, align 4		; <i64> [#uses=1]
-	%52 = getelementptr %"struct.llvm::GlobalAddressSDNode"* %GA, i32 0, i32 0, i32 6		; <%"struct.llvm::MVT"**> [#uses=1]
-	%53 = load %"struct.llvm::MVT"** %52, align 4		; <%"struct.llvm::MVT"*> [#uses=1]
-	%54 = getelementptr %"struct.llvm::MVT"* %53, i32 0, i32 0, i32 0		; <i32*> [#uses=1]
-	%55 = load i32* %54, align 4		; <i32> [#uses=1]
-	%56 = getelementptr %"struct.llvm::GlobalAddressSDNode"* %GA, i32 0, i32 1		; <%"struct.llvm::GlobalValue"**> [#uses=1]
-	%57 = load %"struct.llvm::GlobalValue"** %56, align 4		; <%"struct.llvm::GlobalValue"*> [#uses=1]
-	%58 = getelementptr %"struct.llvm::MVT"* %VT182, i32 0, i32 0, i32 0		; <i32*> [#uses=1]
-	store i32 %55, i32* %58, align 8
-	%59 = call i64 @_ZN4llvm12SelectionDAG16getGlobalAddressEPKNS_11GlobalValueENS_3MVTExb(%"struct.llvm::SelectionDAG"* %DAG, %"struct.llvm::GlobalValue"* %57, %"struct.llvm::MVT"* byval align 4 %VT182, i64 %51, i8 zeroext 1) nounwind		; <i64> [#uses=2]
-	%60 = trunc i64 %59 to i32		; <i32> [#uses=1]
-	%sroa.store.elt83 = lshr i64 %59, 32		; <i64> [#uses=1]
-	%61 = trunc i64 %sroa.store.elt83 to i32		; <i32> [#uses=1]
-	%tmp86 = inttoptr i32 %60 to %"struct.llvm::SDNode"*		; <%"struct.llvm::SDNode"*> [#uses=1]
-	%62 = getelementptr [3 x %"struct.llvm::SDValue"]* %Ops, i32 0, i32 0, i32 0		; <%"struct.llvm::SDNode"**> [#uses=1]
-	store %"struct.llvm::SDNode"* %tmp66, %"struct.llvm::SDNode"** %62, align 8
-	%63 = getelementptr [3 x %"struct.llvm::SDValue"]* %Ops, i32 0, i32 0, i32 1		; <i32*> [#uses=1]
-	store i32 %49, i32* %63, align 4
-	%64 = getelementptr [3 x %"struct.llvm::SDValue"]* %Ops, i32 0, i32 1, i32 0		; <%"struct.llvm::SDNode"**> [#uses=1]
-	store %"struct.llvm::SDNode"* %tmp86, %"struct.llvm::SDNode"** %64, align 8
-	%65 = getelementptr [3 x %"struct.llvm::SDValue"]* %Ops, i32 0, i32 1, i32 1		; <i32*> [#uses=1]
-	store i32 %61, i32* %65, align 4
-	%66 = getelementptr [3 x %"struct.llvm::SDValue"]* %Ops, i32 0, i32 2, i32 0		; <%"struct.llvm::SDNode"**> [#uses=1]
-	store %"struct.llvm::SDNode"* %tmp66, %"struct.llvm::SDNode"** %66, align 8
-	%67 = getelementptr [3 x %"struct.llvm::SDValue"]* %Ops, i32 0, i32 2, i32 1		; <i32*> [#uses=1]
-	store i32 1, i32* %67, align 4
-	%68 = getelementptr [3 x %"struct.llvm::SDValue"]* %Ops, i32 0, i32 0		; <%"struct.llvm::SDValue"*> [#uses=1]
-	%69 = call i64 @_ZN4llvm12SelectionDAG7getNodeEjNS_8DebugLocENS_8SDVTListEPKNS_7SDValueEj(%"struct.llvm::SelectionDAG"* %DAG, i32 220, i32 %7, %"struct.llvm::SDVTList"* byval align 4 %NodeTys, %"struct.llvm::SDValue"* %68, i32 3) nounwind		; <i64> [#uses=2]
-	%70 = trunc i64 %69 to i32		; <i32> [#uses=1]
-	%sroa.store.elt89 = lshr i64 %69, 32		; <i64> [#uses=1]
-	%71 = trunc i64 %sroa.store.elt89 to i32		; <i32> [#uses=3]
-	%tmp92 = inttoptr i32 %70 to %"struct.llvm::SDNode"*		; <%"struct.llvm::SDNode"*> [#uses=7]
-	call void @_ZNK4llvm6SDNode4dumpEv(%"struct.llvm::SDNode"* %tmp92) nounwind
-	%72 = getelementptr %"struct.llvm::MVT"* %VT1.i174, i32 0, i32 0, i32 0		; <i32*> [#uses=1]
-	store i32 0, i32* %72, align 8
-	%73 = getelementptr %"struct.llvm::MVT"* %VT2.i173, i32 0, i32 0, i32 0		; <i32*> [#uses=1]
-	store i32 12, i32* %73, align 8
-	%74 = call i64 @_ZN4llvm12SelectionDAG9getVTListENS_3MVTES1_(%"struct.llvm::SelectionDAG"* %DAG, %"struct.llvm::MVT"* byval align 4 %VT1.i174, %"struct.llvm::MVT"* byval align 4 %VT2.i173) nounwind		; <i64> [#uses=1]
-	%75 = getelementptr [4 x %"struct.llvm::SDValue"]* %Ops.i175, i32 0, i32 0, i32 0		; <%"struct.llvm::SDNode"**> [#uses=1]
-	store %"struct.llvm::SDNode"* %tmp92, %"struct.llvm::SDNode"** %75, align 8
-	%76 = getelementptr [4 x %"struct.llvm::SDValue"]* %Ops.i175, i32 0, i32 0, i32 1		; <i32*> [#uses=1]
-	store i32 1, i32* %76, align 4
-	%77 = getelementptr %"struct.llvm::SDNode"* %tmp92, i32 0, i32 9		; <i16*> [#uses=1]
-	%78 = load i16* %77, align 2		; <i16> [#uses=1]
-	%79 = zext i16 %78 to i32		; <i32> [#uses=1]
-	%80 = icmp ugt i32 %79, %71		; <i1> [#uses=1]
-	br i1 %80, label %_ZN4llvm12SelectionDAG12getCopyToRegENS_7SDValueENS_8DebugLocEjS1_S1_.exit, label %bb.i.i.i
-
-bb.i.i.i:		; preds = %_ZNK4llvm6SDNode12getValueTypeEj.exit
-	call void @__assert_rtn(i8* getelementptr ([13 x i8]* @_ZZNK4llvm6SDNode12getValueTypeEjE8__func__, i32 0, i32 0), i8* getelementptr ([65 x i8]* @"\01LC81", i32 0, i32 0), i32 1314, i8* getelementptr ([46 x i8]* @"\01LC83", i32 0, i32 0)) noreturn nounwind
-	unreachable
-
-_ZN4llvm12SelectionDAG12getCopyToRegENS_7SDValueENS_8DebugLocEjS1_S1_.exit:		; preds = %_ZNK4llvm6SDNode12getValueTypeEj.exit
-	%81 = trunc i64 %74 to i32		; <i32> [#uses=1]
-	%tmp4.i.i176 = inttoptr i32 %81 to %"struct.llvm::MVT"*		; <%"struct.llvm::MVT"*> [#uses=1]
-	%82 = getelementptr %"struct.llvm::SDNode"* %tmp92, i32 0, i32 6		; <%"struct.llvm::MVT"**> [#uses=1]
-	%83 = load %"struct.llvm::MVT"** %82, align 4		; <%"struct.llvm::MVT"*> [#uses=1]
-	%84 = getelementptr %"struct.llvm::MVT"* %83, i32 %71, i32 0, i32 0		; <i32*> [#uses=1]
-	%85 = load i32* %84, align 4		; <i32> [#uses=1]
-	%86 = getelementptr %"struct.llvm::MVT"* %1, i32 0, i32 0, i32 0		; <i32*> [#uses=1]
-	store i32 %85, i32* %86, align 8
-	%87 = call i64 @_ZN4llvm12SelectionDAG11getRegisterEjNS_3MVTE(%"struct.llvm::SelectionDAG"* %DAG, i32 17, %"struct.llvm::MVT"* byval align 4 %1) nounwind		; <i64> [#uses=2]
-	%88 = trunc i64 %87 to i32		; <i32> [#uses=1]
-	%sroa.store.elt.i177 = lshr i64 %87, 32		; <i64> [#uses=1]
-	%89 = trunc i64 %sroa.store.elt.i177 to i32		; <i32> [#uses=1]
-	%90 = getelementptr [4 x %"struct.llvm::SDValue"]* %Ops.i175, i32 0, i32 1, i32 0		; <%"struct.llvm::SDNode"**> [#uses=1]
-	%tmp5.i178 = inttoptr i32 %88 to %"struct.llvm::SDNode"*		; <%"struct.llvm::SDNode"*> [#uses=1]
-	store %"struct.llvm::SDNode"* %tmp5.i178, %"struct.llvm::SDNode"** %90, align 8
-	%91 = getelementptr [4 x %"struct.llvm::SDValue"]* %Ops.i175, i32 0, i32 1, i32 1		; <i32*> [#uses=1]
-	store i32 %89, i32* %91, align 4
-	%92 = getelementptr [4 x %"struct.llvm::SDValue"]* %Ops.i175, i32 0, i32 2, i32 0		; <%"struct.llvm::SDNode"**> [#uses=1]
-	store %"struct.llvm::SDNode"* %tmp92, %"struct.llvm::SDNode"** %92, align 8
-	%93 = getelementptr [4 x %"struct.llvm::SDValue"]* %Ops.i175, i32 0, i32 2, i32 1		; <i32*> [#uses=1]
-	store i32 %71, i32* %93, align 4
-	%94 = getelementptr [4 x %"struct.llvm::SDValue"]* %Ops.i175, i32 0, i32 3, i32 0		; <%"struct.llvm::SDNode"**> [#uses=1]
-	store %"struct.llvm::SDNode"* %tmp92, %"struct.llvm::SDNode"** %94, align 8
-	%95 = getelementptr [4 x %"struct.llvm::SDValue"]* %Ops.i175, i32 0, i32 3, i32 1		; <i32*> [#uses=1]
-	store i32 2, i32* %95, align 4
-	%96 = icmp eq %"struct.llvm::SDNode"* %tmp92, null		; <i1> [#uses=1]
-	%iftmp.583.0.i = select i1 %96, i32 3, i32 4		; <i32> [#uses=1]
-	%97 = getelementptr [4 x %"struct.llvm::SDValue"]* %Ops.i175, i32 0, i32 0		; <%"struct.llvm::SDValue"*> [#uses=1]
-	%98 = call i64 @_ZN4llvm12SelectionDAG7getNodeEjNS_8DebugLocEPKNS_3MVTEjPKNS_7SDValueEj(%"struct.llvm::SelectionDAG"* %DAG, i32 36, i32 %7, %"struct.llvm::MVT"* %tmp4.i.i176, i32 2, %"struct.llvm::SDValue"* %97, i32 %iftmp.583.0.i) nounwind		; <i64> [#uses=2]
-	%99 = trunc i64 %98 to i32		; <i32> [#uses=1]
-	%sroa.store.elt107 = lshr i64 %98, 32		; <i64> [#uses=1]
-	%100 = trunc i64 %sroa.store.elt107 to i32		; <i32> [#uses=1]
-	%tmp110 = inttoptr i32 %99 to %"struct.llvm::SDNode"*		; <%"struct.llvm::SDNode"*> [#uses=2]
-	%101 = getelementptr %"struct.llvm::MVT"* %3, i32 0, i32 0, i32 0		; <i32*> [#uses=1]
-	store i32 12, i32* %101, align 8
-	%102 = getelementptr %"struct.llvm::MVT"* %2, i32 0, i32 0, i32 0		; <i32*> [#uses=1]
-	store i32 0, i32* %102, align 8
-	%103 = call i64 @_ZN4llvm12SelectionDAG9getVTListENS_3MVTES1_(%"struct.llvm::SelectionDAG"* %DAG, %"struct.llvm::MVT"* byval align 4 %2, %"struct.llvm::MVT"* byval align 4 %3) nounwind		; <i64> [#uses=2]
-	%104 = trunc i64 %103 to i32		; <i32> [#uses=1]
-	%sroa.store.elt119 = lshr i64 %103, 32		; <i64> [#uses=1]
-	%105 = trunc i64 %sroa.store.elt119 to i16		; <i16> [#uses=1]
-	%tmp122 = inttoptr i32 %104 to %"struct.llvm::MVT"*		; <%"struct.llvm::MVT"*> [#uses=1]
-	store %"struct.llvm::MVT"* %tmp122, %"struct.llvm::MVT"** %44, align 8
-	store i16 %105, i16* %45, align 4
-	%106 = getelementptr [5 x %"struct.llvm::SDValue"]* %Ops1, i32 0, i32 0, i32 0		; <%"struct.llvm::SDNode"**> [#uses=1]
-	store %"struct.llvm::SDNode"* %tmp110, %"struct.llvm::SDNode"** %106, align 8
-	%107 = getelementptr [5 x %"struct.llvm::SDValue"]* %Ops1, i32 0, i32 0, i32 1		; <i32*> [#uses=1]
-	store i32 %100, i32* %107, align 4
-	%108 = call i64 @_ZN4llvm12SelectionDAG23getTargetExternalSymbolEPKcNS_3MVTE(%"struct.llvm::SelectionDAG"* %DAG, i8* getelementptr ([16 x i8]* @"\01LC197", i32 0, i32 0), %"struct.llvm::MVT"* byval align 4 %PtrVT) nounwind		; <i64> [#uses=2]
-	%109 = trunc i64 %108 to i32		; <i32> [#uses=1]
-	%sroa.store.elt125 = lshr i64 %108, 32		; <i64> [#uses=1]
-	%110 = trunc i64 %sroa.store.elt125 to i32		; <i32> [#uses=1]
-	%111 = getelementptr [5 x %"struct.llvm::SDValue"]* %Ops1, i32 0, i32 1, i32 0		; <%"struct.llvm::SDNode"**> [#uses=1]
-	%tmp128 = inttoptr i32 %109 to %"struct.llvm::SDNode"*		; <%"struct.llvm::SDNode"*> [#uses=1]
-	store %"struct.llvm::SDNode"* %tmp128, %"struct.llvm::SDNode"** %111, align 8
-	%112 = getelementptr [5 x %"struct.llvm::SDValue"]* %Ops1, i32 0, i32 1, i32 1		; <i32*> [#uses=1]
-	store i32 %110, i32* %112, align 4
-	%113 = call i64 @_ZN4llvm12SelectionDAG11getRegisterEjNS_3MVTE(%"struct.llvm::SelectionDAG"* %DAG, i32 17, %"struct.llvm::MVT"* byval align 4 %PtrVT) nounwind		; <i64> [#uses=2]
-	%114 = trunc i64 %113 to i32		; <i32> [#uses=1]
-	%sroa.store.elt131 = lshr i64 %113, 32		; <i64> [#uses=1]
-	%115 = trunc i64 %sroa.store.elt131 to i32		; <i32> [#uses=1]
-	%116 = getelementptr [5 x %"struct.llvm::SDValue"]* %Ops1, i32 0, i32 2, i32 0		; <%"struct.llvm::SDNode"**> [#uses=1]
-	%tmp134 = inttoptr i32 %114 to %"struct.llvm::SDNode"*		; <%"struct.llvm::SDNode"*> [#uses=1]
-	store %"struct.llvm::SDNode"* %tmp134, %"struct.llvm::SDNode"** %116, align 8
-	%117 = getelementptr [5 x %"struct.llvm::SDValue"]* %Ops1, i32 0, i32 2, i32 1		; <i32*> [#uses=1]
-	store i32 %115, i32* %117, align 4
-	%118 = call i64 @_ZN4llvm12SelectionDAG11getRegisterEjNS_3MVTE(%"struct.llvm::SelectionDAG"* %DAG, i32 19, %"struct.llvm::MVT"* byval align 4 %PtrVT) nounwind		; <i64> [#uses=2]
-	%119 = trunc i64 %118 to i32		; <i32> [#uses=1]
-	%sroa.store.elt137 = lshr i64 %118, 32		; <i64> [#uses=1]
-	%120 = trunc i64 %sroa.store.elt137 to i32		; <i32> [#uses=1]
-	%121 = getelementptr [5 x %"struct.llvm::SDValue"]* %Ops1, i32 0, i32 3, i32 0		; <%"struct.llvm::SDNode"**> [#uses=1]
-	%tmp140 = inttoptr i32 %119 to %"struct.llvm::SDNode"*		; <%"struct.llvm::SDNode"*> [#uses=1]
-	store %"struct.llvm::SDNode"* %tmp140, %"struct.llvm::SDNode"** %121, align 8
-	%122 = getelementptr [5 x %"struct.llvm::SDValue"]* %Ops1, i32 0, i32 3, i32 1		; <i32*> [#uses=1]
-	store i32 %120, i32* %122, align 4
-	%123 = getelementptr [5 x %"struct.llvm::SDValue"]* %Ops1, i32 0, i32 4, i32 0		; <%"struct.llvm::SDNode"**> [#uses=1]
-	store %"struct.llvm::SDNode"* %tmp110, %"struct.llvm::SDNode"** %123, align 8
-	%124 = getelementptr [5 x %"struct.llvm::SDValue"]* %Ops1, i32 0, i32 4, i32 1		; <i32*> [#uses=1]
-	store i32 1, i32* %124, align 4
-	%125 = getelementptr [5 x %"struct.llvm::SDValue"]* %Ops1, i32 0, i32 0		; <%"struct.llvm::SDValue"*> [#uses=1]
-	%126 = call i64 @_ZN4llvm12SelectionDAG7getNodeEjNS_8DebugLocENS_8SDVTListEPKNS_7SDValueEj(%"struct.llvm::SelectionDAG"* %DAG, i32 195, i32 %7, %"struct.llvm::SDVTList"* byval align 4 %NodeTys, %"struct.llvm::SDValue"* %125, i32 5) nounwind		; <i64> [#uses=2]
-	%127 = trunc i64 %126 to i32		; <i32> [#uses=1]
-	%sroa.store.elt143 = lshr i64 %126, 32		; <i64> [#uses=1]
-	%128 = trunc i64 %sroa.store.elt143 to i32		; <i32> [#uses=1]
-	%tmp146 = inttoptr i32 %127 to %"struct.llvm::SDNode"*		; <%"struct.llvm::SDNode"*> [#uses=3]
-	%tmp171195 = getelementptr %"struct.llvm::MVT"* %PtrVT, i32 0, i32 0, i32 0		; <i32*> [#uses=1]
-	%tmp197 = load i32* %tmp171195, align 1		; <i32> [#uses=2]
-	%129 = getelementptr %"struct.llvm::MVT"* %VT, i32 0, i32 0, i32 0		; <i32*> [#uses=1]
-	store i32 %tmp197, i32* %129, align 8
-	%130 = getelementptr %"struct.llvm::MVT"* %VT1.i, i32 0, i32 0, i32 0		; <i32*> [#uses=1]
-	store i32 %tmp197, i32* %130, align 8
-	%131 = getelementptr %"struct.llvm::MVT"* %VT2.i, i32 0, i32 0, i32 0		; <i32*> [#uses=1]
-	store i32 0, i32* %131, align 8
-	%132 = getelementptr %"struct.llvm::MVT"* %VT3.i, i32 0, i32 0, i32 0		; <i32*> [#uses=1]
-	store i32 12, i32* %132, align 8
-	%133 = call i64 @_ZN4llvm12SelectionDAG9getVTListENS_3MVTES1_S1_(%"struct.llvm::SelectionDAG"* %DAG, %"struct.llvm::MVT"* byval align 4 %VT1.i, %"struct.llvm::MVT"* byval align 4 %VT2.i, %"struct.llvm::MVT"* byval align 4 %VT3.i) nounwind		; <i64> [#uses=1]
-	%134 = trunc i64 %133 to i32		; <i32> [#uses=1]
-	%tmp4.i.i = inttoptr i32 %134 to %"struct.llvm::MVT"*		; <%"struct.llvm::MVT"*> [#uses=1]
-	%135 = getelementptr [3 x %"struct.llvm::SDValue"]* %Ops.i, i32 0, i32 0, i32 0		; <%"struct.llvm::SDNode"**> [#uses=1]
-	store %"struct.llvm::SDNode"* %tmp146, %"struct.llvm::SDNode"** %135, align 8
-	%136 = getelementptr [3 x %"struct.llvm::SDValue"]* %Ops.i, i32 0, i32 0, i32 1		; <i32*> [#uses=1]
-	store i32 %128, i32* %136, align 4
-	%137 = call i64 @_ZN4llvm12SelectionDAG11getRegisterEjNS_3MVTE(%"struct.llvm::SelectionDAG"* %DAG, i32 17, %"struct.llvm::MVT"* byval align 4 %VT) nounwind		; <i64> [#uses=2]
-	%138 = trunc i64 %137 to i32		; <i32> [#uses=1]
-	%sroa.store.elt.i = lshr i64 %137, 32		; <i64> [#uses=1]
-	%139 = trunc i64 %sroa.store.elt.i to i32		; <i32> [#uses=1]
-	%140 = getelementptr [3 x %"struct.llvm::SDValue"]* %Ops.i, i32 0, i32 1, i32 0		; <%"struct.llvm::SDNode"**> [#uses=1]
-	%tmp5.i = inttoptr i32 %138 to %"struct.llvm::SDNode"*		; <%"struct.llvm::SDNode"*> [#uses=1]
-	store %"struct.llvm::SDNode"* %tmp5.i, %"struct.llvm::SDNode"** %140, align 8
-	%141 = getelementptr [3 x %"struct.llvm::SDValue"]* %Ops.i, i32 0, i32 1, i32 1		; <i32*> [#uses=1]
-	store i32 %139, i32* %141, align 4
-	%142 = getelementptr [3 x %"struct.llvm::SDValue"]* %Ops.i, i32 0, i32 2, i32 0		; <%"struct.llvm::SDNode"**> [#uses=1]
-	store %"struct.llvm::SDNode"* %tmp146, %"struct.llvm::SDNode"** %142, align 8
-	%143 = getelementptr [3 x %"struct.llvm::SDValue"]* %Ops.i, i32 0, i32 2, i32 1		; <i32*> [#uses=1]
-	store i32 1, i32* %143, align 4
-	%144 = icmp eq %"struct.llvm::SDNode"* %tmp146, null		; <i1> [#uses=1]
-	%iftmp.588.0.i = select i1 %144, i32 2, i32 3		; <i32> [#uses=1]
-	%145 = getelementptr [3 x %"struct.llvm::SDValue"]* %Ops.i, i32 0, i32 0		; <%"struct.llvm::SDValue"*> [#uses=1]
-	%146 = call i64 @_ZN4llvm12SelectionDAG7getNodeEjNS_8DebugLocEPKNS_3MVTEjPKNS_7SDValueEj(%"struct.llvm::SelectionDAG"* %DAG, i32 37, i32 %7, %"struct.llvm::MVT"* %tmp4.i.i, i32 3, %"struct.llvm::SDValue"* %145, i32 %iftmp.588.0.i) nounwind		; <i64> [#uses=1]
-	ret i64 %146
-}
-
-declare void @__assert_rtn(i8*, i8*, i32, i8*) noreturn
-
-declare i64 @_ZN4llvm12SelectionDAG16getGlobalAddressEPKNS_11GlobalValueENS_3MVTExb(%"struct.llvm::SelectionDAG"*, %"struct.llvm::GlobalValue"*, %"struct.llvm::MVT"* byval align 4, i64, i8 zeroext)
-
-declare i64 @_ZN4llvm12SelectionDAG9getVTListENS_3MVTES1_(%"struct.llvm::SelectionDAG"*, %"struct.llvm::MVT"* byval align 4, %"struct.llvm::MVT"* byval align 4)
-
-declare i64 @_ZN4llvm12SelectionDAG7getNodeEjNS_8DebugLocENS_8SDVTListEPKNS_7SDValueEj(%"struct.llvm::SelectionDAG"*, i32, i32, %"struct.llvm::SDVTList"* byval align 4, %"struct.llvm::SDValue"*, i32)
-
-declare i64 @_ZN4llvm12SelectionDAG11getRegisterEjNS_3MVTE(%"struct.llvm::SelectionDAG"*, i32, %"struct.llvm::MVT"* byval align 4)
-
-declare i64 @_ZN4llvm12SelectionDAG7getNodeEjNS_8DebugLocEPKNS_3MVTEjPKNS_7SDValueEj(%"struct.llvm::SelectionDAG"*, i32, i32, %"struct.llvm::MVT"*, i32, %"struct.llvm::SDValue"*, i32)
-
-declare i64 @_ZN4llvm12SelectionDAG9getVTListENS_3MVTES1_S1_(%"struct.llvm::SelectionDAG"*, %"struct.llvm::MVT"* byval align 4, %"struct.llvm::MVT"* byval align 4, %"struct.llvm::MVT"* byval align 4)
-
-declare i64 @_ZN4llvm12SelectionDAG23getTargetExternalSymbolEPKcNS_3MVTE(%"struct.llvm::SelectionDAG"*, i8*, %"struct.llvm::MVT"* byval align 4)
-
-declare i64 @_ZN4llvm12SelectionDAG7getNodeEjNS_8DebugLocENS_3MVTE(%"struct.llvm::SelectionDAG"*, i32, i32, %"struct.llvm::MVT"* byval align 4)
-
-declare void @_ZNK4llvm6SDNode4dumpEv(%"struct.llvm::SDNode"*)
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-04-21-NoReloadImpDef.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-04-21-NoReloadImpDef.ll
index 5bd956a..abbe97a 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2009-04-21-NoReloadImpDef.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-04-21-NoReloadImpDef.ll
@@ -1,4 +1,4 @@
-; RUN: llc -mtriple=i386-apple-darwin10.0 -relocation-model=pic \
+; RUN: llc -mtriple=i386-apple-darwin10.0 -relocation-model=pic -asm-verbose=false \
 ; RUN:     -disable-fp-elim -mattr=-sse41,-sse3,+sse2 -post-RA-scheduler=false < %s | \
 ; RUN:   FileCheck %s
 ; rdar://6808032
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-09-07-CoalescerBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-09-07-CoalescerBug.ll
index 55432be..a5b4a79 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2009-09-07-CoalescerBug.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-09-07-CoalescerBug.ll
@@ -8,8 +8,8 @@
 define i64 @hammer_time(i64 %modulep, i64 %physfree) nounwind ssp noredzone noimplicitfloat {
 ; CHECK: hammer_time:
 ; CHECK: movq $Xrsvd, %rax
+; CHECK: movq $Xrsvd, %rsi
 ; CHECK: movq $Xrsvd, %rdi
-; CHECK: movq $Xrsvd, %r8
 entry:
   br i1 undef, label %if.then, label %if.end
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-09-10-LoadFoldingBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-09-10-LoadFoldingBug.ll
index 9e58872..7b5e871 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2009-09-10-LoadFoldingBug.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-09-10-LoadFoldingBug.ll
@@ -13,7 +13,6 @@ define i32 @t(i32 %clientPort, i32 %pluginID, i32 %requestID, i32 %objectID, i64
 entry:
 ; CHECK: _t:
 ; CHECK: movl 16(%rbp),
-; CHECK: movl 16(%rbp), %edx
   %0 = zext i32 %argumentsLength to i64           ; <i64> [#uses=1]
   %1 = zext i32 %clientPort to i64                ; <i64> [#uses=1]
   %2 = inttoptr i64 %1 to %struct.ComplexType*    ; <%struct.ComplexType*> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2009-11-04-SubregCoalescingBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2009-11-04-SubregCoalescingBug.ll
index 628b899..b5be65f 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/2009-11-04-SubregCoalescingBug.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2009-11-04-SubregCoalescingBug.ll
@@ -5,7 +5,7 @@ define void @bar(i32 %b, i32 %a) nounwind optsize ssp {
 entry:
 ; CHECK:     leal 15(%rsi), %edi
 ; CHECK-NOT: movl
-; CHECK:     callq _foo
+; CHECK:     _foo
   %0 = add i32 %a, 15                             ; <i32> [#uses=1]
   %1 = zext i32 %0 to i64                         ; <i64> [#uses=1]
   tail call void @foo(i64 %1) nounwind
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2010-02-01-TaillCallCrash.ll b/libclamav/c++/llvm/test/CodeGen/X86/2010-02-01-TaillCallCrash.ll
new file mode 100644
index 0000000..2751174
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2010-02-01-TaillCallCrash.ll
@@ -0,0 +1,12 @@
+; RUN: llc < %s -mtriple=x86_64-unknown-linux-gnu
+; PR6196
+
+%"char[]" = type [1 x i8]
+
+ at .str = external constant %"char[]", align 1      ; <%"char[]"*> [#uses=1]
+
+define i32 @regex_subst() nounwind {
+entry:
+  %0 = tail call i32 bitcast (%"char[]"* @.str to i32 (i32)*)(i32 0) nounwind ; <i32> [#uses=1]
+  ret i32 %0
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2010-02-03-DualUndef.ll b/libclamav/c++/llvm/test/CodeGen/X86/2010-02-03-DualUndef.ll
new file mode 100644
index 0000000..d116ecc
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2010-02-03-DualUndef.ll
@@ -0,0 +1,27 @@
+; RUN: llc < %s -march=x86-64
+; PR6086
+define fastcc void @prepOutput() nounwind {
+bb:                                               ; preds = %output.exit
+  br label %bb.i1
+
+bb.i1:                                            ; preds = %bb7.i, %bb
+  br i1 undef, label %bb7.i, label %bb.nph.i
+
+bb.nph.i:                                         ; preds = %bb.i1
+  br label %bb3.i
+
+bb3.i:                                            ; preds = %bb5.i6, %bb.nph.i
+  %tmp10.i = trunc i64 undef to i32               ; <i32> [#uses=1]
+  br i1 undef, label %bb4.i, label %bb5.i6
+
+bb4.i:                                            ; preds = %bb3.i
+  br label %bb5.i6
+
+bb5.i6:                                           ; preds = %bb4.i, %bb3.i
+  %0 = phi i32 [ undef, %bb4.i ], [ undef, %bb3.i ] ; <i32> [#uses=1]
+  %1 = icmp slt i32 %0, %tmp10.i                  ; <i1> [#uses=1]
+  br i1 %1, label %bb7.i, label %bb3.i
+
+bb7.i:                                            ; preds = %bb5.i6, %bb.i1
+  br label %bb.i1
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2010-02-04-SchedulerBug.ll b/libclamav/c++/llvm/test/CodeGen/X86/2010-02-04-SchedulerBug.ll
new file mode 100644
index 0000000..c966e21
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2010-02-04-SchedulerBug.ll
@@ -0,0 +1,28 @@
+; RUN: llc < %s -mtriple=i386-apple-darwin11
+; rdar://7604000
+
+%struct.a_t = type { i8*, i64*, i8*, i32, i32, i64*, i64*, i64* }
+%struct.b_t = type { i32, i32, i32, i32, i64, i64, i64, i64 }
+
+define void @t(i32 %cNum, i64 %max) nounwind optsize ssp noimplicitfloat {
+entry:
+  %0 = load %struct.b_t** null, align 4 ; <%struct.b_t*> [#uses=1]
+  %1 = getelementptr inbounds %struct.b_t* %0, i32 %cNum, i32 5 ; <i64*> [#uses=1]
+  %2 = load i64* %1, align 4                      ; <i64> [#uses=1]
+  %3 = icmp ult i64 %2, %max            ; <i1> [#uses=1]
+  %4 = getelementptr inbounds %struct.a_t* null, i32 0, i32 7 ; <i64**> [#uses=1]
+  %5 = load i64** %4, align 4                     ; <i64*> [#uses=0]
+  %6 = load i64* null, align 4                    ; <i64> [#uses=1]
+  br i1 %3, label %bb2, label %bb
+
+bb:                                               ; preds = %entry
+  br label %bb3
+
+bb2:                                              ; preds = %entry
+  %7 = or i64 %6, undef                           ; <i64> [#uses=1]
+  br label %bb3
+
+bb3:                                              ; preds = %bb2, %bb
+  %misc_enables.0 = phi i64 [ undef, %bb ], [ %7, %bb2 ] ; <i64> [#uses=0]
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/2010-02-12-CoalescerBug-Impdef.ll b/libclamav/c++/llvm/test/CodeGen/X86/2010-02-12-CoalescerBug-Impdef.ll
new file mode 100644
index 0000000..c5d3d16
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/2010-02-12-CoalescerBug-Impdef.ll
@@ -0,0 +1,260 @@
+; RUN: llc < %s > %t
+; PR6283
+
+; Tricky coalescer bug:
+; After coalescing %RAX with a virtual register, this instruction was rematted:
+;
+;   %EAX<def> = MOV32rr %reg1070<kill>
+;
+; This instruction silently defined %RAX, and when rematting removed the
+; instruction, the live interval for %RAX was not properly updated. The valno
+; referred to a deleted instruction and bad things happened.
+;
+; The fix is to implicitly define %RAX when coalescing:
+;
+;   %EAX<def> = MOV32rr %reg1070<kill>, %RAX<imp-def>
+;
+
+target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64"
+target triple = "x86_64-unknown-linux-gnu"
+
+module asm "\09.ident\09\22GCC: (GNU) 4.5.0 20100212 (experimental) LLVM: 95975\22"
+
+%0 = type { %"union gimple_statement_d"* }
+%"BITMAP_WORD[]" = type [2 x i64]
+%"char[]" = type [4 x i8]
+%"enum dom_state[]" = type [2 x i32]
+%"int[]" = type [4 x i32]
+%"struct VEC_basic_block_base" = type { i32, i32, [1 x %"struct basic_block_def"*] }
+%"struct VEC_basic_block_gc" = type { %"struct VEC_basic_block_base" }
+%"struct VEC_edge_base" = type { i32, i32, [1 x %"struct edge_def"*] }
+%"struct VEC_edge_gc" = type { %"struct VEC_edge_base" }
+%"struct VEC_gimple_base" = type { i32, i32, [1 x %"union gimple_statement_d"*] }
+%"struct VEC_gimple_gc" = type { %"struct VEC_gimple_base" }
+%"struct VEC_iv_cand_p_base" = type { i32, i32, [1 x %"struct iv_cand"*] }
+%"struct VEC_iv_cand_p_heap" = type { %"struct VEC_iv_cand_p_base" }
+%"struct VEC_iv_use_p_base" = type { i32, i32, [1 x %"struct iv_use"*] }
+%"struct VEC_iv_use_p_heap" = type { %"struct VEC_iv_use_p_base" }
+%"struct VEC_loop_p_base" = type { i32, i32, [1 x %"struct loop"*] }
+%"struct VEC_loop_p_gc" = type { %"struct VEC_loop_p_base" }
+%"struct VEC_rtx_base" = type { i32, i32, [1 x %"struct rtx_def"*] }
+%"struct VEC_rtx_gc" = type { %"struct VEC_rtx_base" }
+%"struct VEC_tree_base" = type { i32, i32, [1 x %"union tree_node"*] }
+%"struct VEC_tree_gc" = type { %"struct VEC_tree_base" }
+%"struct _obstack_chunk" = type { i8*, %"struct _obstack_chunk"*, %"char[]" }
+%"struct basic_block_def" = type { %"struct VEC_edge_gc"*, %"struct VEC_edge_gc"*, i8*, %"struct loop"*, [2 x %"struct et_node"*], %"struct basic_block_def"*, %"struct basic_block_def"*, %"union basic_block_il_dependent", i64, i32, i32, i32, i32, i32 }
+%"struct bitmap_element" = type { %"struct bitmap_element"*, %"struct bitmap_element"*, i32, %"BITMAP_WORD[]" }
+%"struct bitmap_head_def" = type { %"struct bitmap_element"*, %"struct bitmap_element"*, i32, %"struct bitmap_obstack"* }
+%"struct bitmap_obstack" = type { %"struct bitmap_element"*, %"struct bitmap_head_def"*, %"struct obstack" }
+%"struct block_symbol" = type { [3 x %"union rtunion"], %"struct object_block"*, i64 }
+%"struct comp_cost" = type { i32, i32 }
+%"struct control_flow_graph" = type { %"struct basic_block_def"*, %"struct basic_block_def"*, %"struct VEC_basic_block_gc"*, i32, i32, i32, %"struct VEC_basic_block_gc"*, i32, %"enum dom_state[]", %"enum dom_state[]", i32, i32 }
+%"struct cost_pair" = type { %"struct iv_cand"*, %"struct comp_cost", %"struct bitmap_head_def"*, %"union tree_node"* }
+%"struct def_optype_d" = type { %"struct def_optype_d"*, %"union tree_node"** }
+%"struct double_int" = type { i64, i64 }
+%"struct edge_def" = type { %"struct basic_block_def"*, %"struct basic_block_def"*, %"union edge_def_insns", i8*, %"union tree_node"*, i32, i32, i32, i32, i64 }
+%"struct eh_status" = type opaque
+%"struct et_node" = type opaque
+%"struct function" = type { %"struct eh_status"*, %"struct control_flow_graph"*, %"struct gimple_seq_d"*, %"struct gimple_df"*, %"struct loops"*, %"struct htab"*, %"union tree_node"*, %"union tree_node"*, %"union tree_node"*, %"union tree_node"*, %"struct machine_function"*, %"struct language_function"*, %"struct htab"*, i32, i32, i32, i32, i32, i32, i8*, i8, i8, i8, i8 }
+%"struct gimple_bb_info" = type { %"struct gimple_seq_d"*, %"struct gimple_seq_d"* }
+%"struct gimple_df" = type { %"struct htab"*, %"struct VEC_gimple_gc"*, %"struct VEC_tree_gc"*, %"union tree_node"*, %"struct pt_solution", %"struct pt_solution", %"struct pointer_map_t"*, %"union tree_node"*, %"struct htab"*, %"struct bitmap_head_def"*, i8, %"struct ssa_operands" }
+%"struct gimple_seq_d" = type { %"struct gimple_seq_node_d"*, %"struct gimple_seq_node_d"*, %"struct gimple_seq_d"* }
+%"struct gimple_seq_node_d" = type { %"union gimple_statement_d"*, %"struct gimple_seq_node_d"*, %"struct gimple_seq_node_d"* }
+%"struct gimple_statement_base" = type { i8, i8, i16, i32, i32, i32, %"struct basic_block_def"*, %"union tree_node"* }
+%"struct gimple_statement_phi" = type { %"struct gimple_statement_base", i32, i32, %"union tree_node"*, %"struct phi_arg_d[]" }
+%"struct htab" = type { i32 (i8*)*, i32 (i8*, i8*)*, void (i8*)*, i8**, i64, i64, i64, i32, i32, i8* (i64, i64)*, void (i8*)*, i8*, i8* (i8*, i64, i64)*, void (i8*, i8*)*, i32 }
+%"struct iv" = type { %"union tree_node"*, %"union tree_node"*, %"union tree_node"*, %"union tree_node"*, i8, i8, i32 }
+%"struct iv_cand" = type { i32, i8, i32, %"union gimple_statement_d"*, %"union tree_node"*, %"union tree_node"*, %"struct iv"*, i32, i32, %"struct iv_use"*, %"struct bitmap_head_def"* }
+%"struct iv_use" = type { i32, i32, %"struct iv"*, %"union gimple_statement_d"*, %"union tree_node"**, %"struct bitmap_head_def"*, i32, %"struct cost_pair"*, %"struct iv_cand"* }
+%"struct ivopts_data" = type { %"struct loop"*, %"struct pointer_map_t"*, i32, i32, %"struct version_info"*, %"struct bitmap_head_def"*, %"struct VEC_iv_use_p_heap"*, %"struct VEC_iv_cand_p_heap"*, %"struct bitmap_head_def"*, i32, i8, i8 }
+%"struct lang_decl" = type opaque
+%"struct language_function" = type opaque
+%"struct loop" = type { i32, i32, %"struct basic_block_def"*, %"struct basic_block_def"*, %"struct comp_cost", i32, i32, %"struct VEC_loop_p_gc"*, %"struct loop"*, %"struct loop"*, i8*, %"union tree_node"*, %"struct double_int", %"struct double_int", i8, i8, i32, %"struct nb_iter_bound"*, %"struct loop_exit"*, i8, %"union tree_node"* }
+%"struct loop_exit" = type { %"struct edge_def"*, %"struct loop_exit"*, %"struct loop_exit"*, %"struct loop_exit"* }
+%"struct loops" = type { i32, %"struct VEC_loop_p_gc"*, %"struct htab"*, %"struct loop"* }
+%"struct machine_cfa_state" = type { %"struct rtx_def"*, i64 }
+%"struct machine_function" = type { %"struct stack_local_entry"*, i8*, i32, i32, %"int[]", i32, %"struct machine_cfa_state", i32, i8 }
+%"struct nb_iter_bound" = type { %"union gimple_statement_d"*, %"struct double_int", i8, %"struct nb_iter_bound"* }
+%"struct object_block" = type { %"union section"*, i32, i64, %"struct VEC_rtx_gc"*, %"struct VEC_rtx_gc"* }
+%"struct obstack" = type { i64, %"struct _obstack_chunk"*, i8*, i8*, i8*, i64, i32, %"struct _obstack_chunk"* (i8*, i64)*, void (i8*, %"struct _obstack_chunk"*)*, i8*, i8 }
+%"struct phi_arg_d" = type { %"struct ssa_use_operand_d", %"union tree_node"*, i32 }
+%"struct phi_arg_d[]" = type [1 x %"struct phi_arg_d"]
+%"struct pointer_map_t" = type opaque
+%"struct pt_solution" = type { i8, %"struct bitmap_head_def"* }
+%"struct rtx_def" = type { i16, i8, i8, %"union u" }
+%"struct section_common" = type { i32 }
+%"struct ssa_operand_memory_d" = type { %"struct ssa_operand_memory_d"*, %"uchar[]" }
+%"struct ssa_operands" = type { %"struct ssa_operand_memory_d"*, i32, i32, i8, %"struct def_optype_d"*, %"struct use_optype_d"* }
+%"struct ssa_use_operand_d" = type { %"struct ssa_use_operand_d"*, %"struct ssa_use_operand_d"*, %0, %"union tree_node"** }
+%"struct stack_local_entry" = type opaque
+%"struct tree_base" = type <{ i16, i8, i8, i8, [2 x i8], i8 }>
+%"struct tree_common" = type { %"struct tree_base", %"union tree_node"*, %"union tree_node"* }
+%"struct tree_decl_common" = type { %"struct tree_decl_minimal", %"union tree_node"*, i8, i8, i8, i8, i8, i32, %"union tree_node"*, %"union tree_node"*, %"union tree_node"*, %"union tree_node"*, %"struct lang_decl"* }
+%"struct tree_decl_minimal" = type { %"struct tree_common", i32, i32, %"union tree_node"*, %"union tree_node"* }
+%"struct tree_decl_non_common" = type { %"struct tree_decl_with_vis", %"union tree_node"*, %"union tree_node"*, %"union tree_node"*, %"union tree_node"* }
+%"struct tree_decl_with_rtl" = type { %"struct tree_decl_common", %"struct rtx_def"* }
+%"struct tree_decl_with_vis" = type { %"struct tree_decl_with_rtl", %"union tree_node"*, %"union tree_node"*, %"union tree_node"*, i8, i8, i8 }
+%"struct tree_function_decl" = type { %"struct tree_decl_non_common", %"struct function"*, %"union tree_node"*, %"union tree_node"*, %"union tree_node"*, i16, i8, i8 }
+%"struct unnamed_section" = type { %"struct section_common", void (i8*)*, i8*, %"union section"* }
+%"struct use_optype_d" = type { %"struct use_optype_d"*, %"struct ssa_use_operand_d" }
+%"struct version_info" = type { %"union tree_node"*, %"struct iv"*, i8, i32, i8 }
+%"uchar[]" = type [1 x i8]
+%"union basic_block_il_dependent" = type { %"struct gimple_bb_info"* }
+%"union edge_def_insns" = type { %"struct gimple_seq_d"* }
+%"union gimple_statement_d" = type { %"struct gimple_statement_phi" }
+%"union rtunion" = type { i8* }
+%"union section" = type { %"struct unnamed_section" }
+%"union tree_node" = type { %"struct tree_function_decl" }
+%"union u" = type { %"struct block_symbol" }
+
+declare fastcc %"union tree_node"* @get_computation_at(%"struct loop"*, %"struct iv_use"* nocapture, %"struct iv_cand"* nocapture, %"union gimple_statement_d"*) nounwind
+
+declare fastcc i32 @computation_cost(%"union tree_node"*, i8 zeroext) nounwind
+
+define fastcc i64 @get_computation_cost_at(%"struct ivopts_data"* %data, %"struct iv_use"* nocapture %use, %"struct iv_cand"* nocapture %cand, i8 zeroext %address_p, %"struct bitmap_head_def"** %depends_on, %"union gimple_statement_d"* %at, i8* %can_autoinc) nounwind {
+entry:
+  br i1 undef, label %"100", label %"4"
+
+"4":                                              ; preds = %entry
+  br i1 undef, label %"6", label %"5"
+
+"5":                                              ; preds = %"4"
+  unreachable
+
+"6":                                              ; preds = %"4"
+  br i1 undef, label %"8", label %"7"
+
+"7":                                              ; preds = %"6"
+  unreachable
+
+"8":                                              ; preds = %"6"
+  br i1 undef, label %"100", label %"10"
+
+"10":                                             ; preds = %"8"
+  br i1 undef, label %"17", label %"16"
+
+"16":                                             ; preds = %"10"
+  unreachable
+
+"17":                                             ; preds = %"10"
+  br i1 undef, label %"19", label %"18"
+
+"18":                                             ; preds = %"17"
+  unreachable
+
+"19":                                             ; preds = %"17"
+  br i1 undef, label %"93", label %"20"
+
+"20":                                             ; preds = %"19"
+  br i1 undef, label %"23", label %"21"
+
+"21":                                             ; preds = %"20"
+  unreachable
+
+"23":                                             ; preds = %"20"
+  br i1 undef, label %"100", label %"25"
+
+"25":                                             ; preds = %"23"
+  br i1 undef, label %"100", label %"26"
+
+"26":                                             ; preds = %"25"
+  br i1 undef, label %"30", label %"28"
+
+"28":                                             ; preds = %"26"
+  unreachable
+
+"30":                                             ; preds = %"26"
+  br i1 undef, label %"59", label %"51"
+
+"51":                                             ; preds = %"30"
+  br i1 undef, label %"55", label %"52"
+
+"52":                                             ; preds = %"51"
+  unreachable
+
+"55":                                             ; preds = %"51"
+  %0 = icmp ugt i32 0, undef                      ; <i1> [#uses=1]
+  br i1 %0, label %"50.i", label %"9.i"
+
+"9.i":                                            ; preds = %"55"
+  unreachable
+
+"50.i":                                           ; preds = %"55"
+  br i1 undef, label %"55.i", label %"54.i"
+
+"54.i":                                           ; preds = %"50.i"
+  br i1 undef, label %"57.i", label %"55.i"
+
+"55.i":                                           ; preds = %"54.i", %"50.i"
+  unreachable
+
+"57.i":                                           ; preds = %"54.i"
+  br label %"63.i"
+
+"61.i":                                           ; preds = %"63.i"
+  br i1 undef, label %"64.i", label %"62.i"
+
+"62.i":                                           ; preds = %"61.i"
+  br label %"63.i"
+
+"63.i":                                           ; preds = %"62.i", %"57.i"
+  br i1 undef, label %"61.i", label %"64.i"
+
+"64.i":                                           ; preds = %"63.i", %"61.i"
+  unreachable
+
+"59":                                             ; preds = %"30"
+  br i1 undef, label %"60", label %"82"
+
+"60":                                             ; preds = %"59"
+  br i1 undef, label %"61", label %"82"
+
+"61":                                             ; preds = %"60"
+  br i1 undef, label %"62", label %"82"
+
+"62":                                             ; preds = %"61"
+  br i1 undef, label %"100", label %"63"
+
+"63":                                             ; preds = %"62"
+  br i1 undef, label %"65", label %"64"
+
+"64":                                             ; preds = %"63"
+  unreachable
+
+"65":                                             ; preds = %"63"
+  br i1 undef, label %"66", label %"67"
+
+"66":                                             ; preds = %"65"
+  unreachable
+
+"67":                                             ; preds = %"65"
+  %1 = load i32* undef, align 4                   ; <i32> [#uses=0]
+  br label %"100"
+
+"82":                                             ; preds = %"61", %"60", %"59"
+  unreachable
+
+"93":                                             ; preds = %"19"
+  %2 = call fastcc %"union tree_node"* @get_computation_at(%"struct loop"* undef, %"struct iv_use"* %use, %"struct iv_cand"* %cand, %"union gimple_statement_d"* %at) nounwind ; <%"union tree_node"*> [#uses=1]
+  br i1 undef, label %"100", label %"97"
+
+"97":                                             ; preds = %"93"
+  br i1 undef, label %"99", label %"98"
+
+"98":                                             ; preds = %"97"
+  br label %"99"
+
+"99":                                             ; preds = %"98", %"97"
+  %3 = phi %"union tree_node"* [ undef, %"98" ], [ %2, %"97" ] ; <%"union tree_node"*> [#uses=1]
+  %4 = call fastcc i32 @computation_cost(%"union tree_node"* %3, i8 zeroext undef) nounwind ; <i32> [#uses=1]
+  br label %"100"
+
+"100":                                            ; preds = %"99", %"93", %"67", %"62", %"25", %"23", %"8", %entry
+  %memtmp1.1.0 = phi i32 [ 0, %"99" ], [ 10000000, %entry ], [ 10000000, %"8" ], [ 10000000, %"23" ], [ 10000000, %"25" ], [ undef, %"62" ], [ undef, %"67" ], [ 10000000, %"93" ] ; <i32> [#uses=1]
+  %memtmp1.0.0 = phi i32 [ %4, %"99" ], [ 10000000, %entry ], [ 10000000, %"8" ], [ 10000000, %"23" ], [ 10000000, %"25" ], [ undef, %"62" ], [ undef, %"67" ], [ 10000000, %"93" ] ; <i32> [#uses=1]
+  %5 = zext i32 %memtmp1.0.0 to i64               ; <i64> [#uses=1]
+  %6 = zext i32 %memtmp1.1.0 to i64               ; <i64> [#uses=1]
+  %7 = shl i64 %6, 32                             ; <i64> [#uses=1]
+  %8 = or i64 %7, %5                              ; <i64> [#uses=1]
+  ret i64 %8
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/Generic/SwitchLowering.ll b/libclamav/c++/llvm/test/CodeGen/X86/SwitchLowering.ll
similarity index 100%
rename from libclamav/c++/llvm/test/CodeGen/Generic/SwitchLowering.ll
rename to libclamav/c++/llvm/test/CodeGen/X86/SwitchLowering.ll
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/add-trick32.ll b/libclamav/c++/llvm/test/CodeGen/X86/add-trick32.ll
deleted file mode 100644
index e86045d..0000000
--- a/libclamav/c++/llvm/test/CodeGen/X86/add-trick32.ll
+++ /dev/null
@@ -1,11 +0,0 @@
-; RUN: llc < %s -march=x86 > %t
-; RUN: not grep add %t
-; RUN: grep subl %t | count 1
-
-; The immediate can be encoded in a smaller way if the
-; instruction is a sub instead of an add.
-
-define i32 @foo(i32 inreg %a) nounwind {
-  %b = add i32 %a, 128
-  ret i32 %b
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/add-trick64.ll b/libclamav/c++/llvm/test/CodeGen/X86/add-trick64.ll
deleted file mode 100644
index 2f1fcee..0000000
--- a/libclamav/c++/llvm/test/CodeGen/X86/add-trick64.ll
+++ /dev/null
@@ -1,15 +0,0 @@
-; RUN: llc < %s -march=x86-64 > %t
-; RUN: not grep add %t
-; RUN: grep subq %t | count 2
-
-; The immediate can be encoded in a smaller way if the
-; instruction is a sub instead of an add.
-
-define i64 @foo(i64 inreg %a) nounwind {
-  %b = add i64 %a, 2147483648
-  ret i64 %b
-}
-define i64 @bar(i64 inreg %a) nounwind {
-  %b = add i64 %a, 128
-  ret i64 %b
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/add-with-overflow.ll b/libclamav/c++/llvm/test/CodeGen/X86/add-with-overflow.ll
deleted file mode 100644
index 0f705dc..0000000
--- a/libclamav/c++/llvm/test/CodeGen/X86/add-with-overflow.ll
+++ /dev/null
@@ -1,75 +0,0 @@
-; RUN: llc < %s -march=x86 | grep {jo} | count 2
-; RUN: llc < %s -march=x86 | grep {jb} | count 2
-; RUN: llc < %s -march=x86 -O0 | grep {jo} | count 2
-; RUN: llc < %s -march=x86 -O0 | grep {jb} | count 2
-
- at ok = internal constant [4 x i8] c"%d\0A\00"
- at no = internal constant [4 x i8] c"no\0A\00"
-
-define i1 @func1(i32 %v1, i32 %v2) nounwind {
-entry:
-  %t = call {i32, i1} @llvm.sadd.with.overflow.i32(i32 %v1, i32 %v2)
-  %sum = extractvalue {i32, i1} %t, 0
-  %obit = extractvalue {i32, i1} %t, 1
-  br i1 %obit, label %overflow, label %normal
-
-normal:
-  %t1 = tail call i32 (i8*, ...)* @printf( i8* getelementptr ([4 x i8]* @ok, i32 0, i32 0), i32 %sum ) nounwind
-  ret i1 true
-
-overflow:
-  %t2 = tail call i32 (i8*, ...)* @printf( i8* getelementptr ([4 x i8]* @no, i32 0, i32 0) ) nounwind
-  ret i1 false
-}
-
-define i1 @func2(i32 %v1, i32 %v2) nounwind {
-entry:
-  %t = call {i32, i1} @llvm.uadd.with.overflow.i32(i32 %v1, i32 %v2)
-  %sum = extractvalue {i32, i1} %t, 0
-  %obit = extractvalue {i32, i1} %t, 1
-  br i1 %obit, label %carry, label %normal
-
-normal:
-  %t1 = tail call i32 (i8*, ...)* @printf( i8* getelementptr ([4 x i8]* @ok, i32 0, i32 0), i32 %sum ) nounwind
-  ret i1 true
-
-carry:
-  %t2 = tail call i32 (i8*, ...)* @printf( i8* getelementptr ([4 x i8]* @no, i32 0, i32 0) ) nounwind
-  ret i1 false
-}
-
-define i1 @func3() nounwind {
-entry:
-  %t = call {i32, i1} @llvm.sadd.with.overflow.i32(i32 0, i32 0)
-  %sum = extractvalue {i32, i1} %t, 0
-  %obit = extractvalue {i32, i1} %t, 1
-  br i1 %obit, label %carry, label %normal
-
-normal:
-  %t1 = tail call i32 (i8*, ...)* @printf( i8* getelementptr ([4 x i8]* @ok, i32 0, i32 0), i32 %sum ) nounwind
-  ret i1 true
-
-carry:
-  %t2 = tail call i32 (i8*, ...)* @printf( i8* getelementptr ([4 x i8]* @no, i32 0, i32 0) ) nounwind
-  ret i1 false
-}
-
-define i1 @func4() nounwind {
-entry:
-  %t = call {i32, i1} @llvm.uadd.with.overflow.i32(i32 0, i32 0)
-  %sum = extractvalue {i32, i1} %t, 0
-  %obit = extractvalue {i32, i1} %t, 1
-  br i1 %obit, label %carry, label %normal
-
-normal:
-  %t1 = tail call i32 (i8*, ...)* @printf( i8* getelementptr ([4 x i8]* @ok, i32 0, i32 0), i32 %sum ) nounwind
-  ret i1 true
-
-carry:
-  %t2 = tail call i32 (i8*, ...)* @printf( i8* getelementptr ([4 x i8]* @no, i32 0, i32 0) ) nounwind
-  ret i1 false
-}
-
-declare i32 @printf(i8*, ...) nounwind
-declare {i32, i1} @llvm.sadd.with.overflow.i32(i32, i32)
-declare {i32, i1} @llvm.uadd.with.overflow.i32(i32, i32)
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/add.ll b/libclamav/c++/llvm/test/CodeGen/X86/add.ll
new file mode 100644
index 0000000..3991a68
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/add.ll
@@ -0,0 +1,94 @@
+; RUN: llc < %s -march=x86 | FileCheck %s -check-prefix=X32
+; RUN: llc < %s -march=x86-64 | FileCheck %s -check-prefix=X64
+
+; The immediate can be encoded in a smaller way if the
+; instruction is a sub instead of an add.
+
+define i32 @test1(i32 inreg %a) nounwind {
+  %b = add i32 %a, 128
+  ret i32 %b
+; X32: subl	$-128, %eax
+; X64: subl $-128, 
+}
+define i64 @test2(i64 inreg %a) nounwind {
+  %b = add i64 %a, 2147483648
+  ret i64 %b
+; X32: addl	$-2147483648, %eax
+; X64: subq	$-2147483648,
+}
+define i64 @test3(i64 inreg %a) nounwind {
+  %b = add i64 %a, 128
+  ret i64 %b
+  
+; X32: addl $128, %eax
+; X64: subq	$-128,
+}
+
+define i1 @test4(i32 %v1, i32 %v2, i32* %X) nounwind {
+entry:
+  %t = call {i32, i1} @llvm.sadd.with.overflow.i32(i32 %v1, i32 %v2)
+  %sum = extractvalue {i32, i1} %t, 0
+  %obit = extractvalue {i32, i1} %t, 1
+  br i1 %obit, label %overflow, label %normal
+
+normal:
+  store i32 0, i32* %X
+  br label %overflow
+
+overflow:
+  ret i1 false
+  
+; X32: test4:
+; X32: addl
+; X32-NEXT: jo
+
+; X64:        test4:
+; X64:          addl	%esi, %edi
+; X64-NEXT:	jo
+}
+
+define i1 @test5(i32 %v1, i32 %v2, i32* %X) nounwind {
+entry:
+  %t = call {i32, i1} @llvm.uadd.with.overflow.i32(i32 %v1, i32 %v2)
+  %sum = extractvalue {i32, i1} %t, 0
+  %obit = extractvalue {i32, i1} %t, 1
+  br i1 %obit, label %carry, label %normal
+
+normal:
+  store i32 0, i32* %X
+  br label %carry
+
+carry:
+  ret i1 false
+
+; X32: test5:
+; X32: addl
+; X32-NEXT: jb
+
+; X64:        test5:
+; X64:          addl	%esi, %edi
+; X64-NEXT:	jb
+}
+
+declare {i32, i1} @llvm.sadd.with.overflow.i32(i32, i32)
+declare {i32, i1} @llvm.uadd.with.overflow.i32(i32, i32)
+
+
+define i64 @test6(i64 %A, i32 %B) nounwind {
+        %tmp12 = zext i32 %B to i64             ; <i64> [#uses=1]
+        %tmp3 = shl i64 %tmp12, 32              ; <i64> [#uses=1]
+        %tmp5 = add i64 %tmp3, %A               ; <i64> [#uses=1]
+        ret i64 %tmp5
+
+; X32: test6:
+; X32:	    movl 12(%esp), %edx
+; X32-NEXT: addl 8(%esp), %edx
+; X32-NEXT: movl 4(%esp), %eax
+; X32-NEXT: ret
+        
+; X64: test6:
+; X64:	shlq	$32, %rsi
+; X64:	leaq	(%rsi,%rdi), %rax
+; X64:	ret
+}
+
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/aliases.ll b/libclamav/c++/llvm/test/CodeGen/X86/aliases.ll
index 0b26859..3020eb3 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/aliases.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/aliases.ll
@@ -1,5 +1,6 @@
 ; RUN: llc < %s -mtriple=i686-pc-linux-gnu -asm-verbose=false -o %t
-; RUN: grep set %t   | count 7
+; RUN: grep { = } %t   | count 7
+; RUN: grep set %t   | count 16
 ; RUN: grep globl %t | count 6
 ; RUN: grep weak %t  | count 1
 ; RUN: grep hidden %t | count 1
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/call-push.ll b/libclamav/c++/llvm/test/CodeGen/X86/call-push.ll
index 7bae5cd..02cbccc 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/call-push.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/call-push.ll
@@ -1,9 +1,14 @@
-; RUN: llc < %s -march=x86 -disable-fp-elim | grep subl | count 1
+; RUN: llc < %s -mtriple=i386-apple-darwin -disable-fp-elim | FileCheck %s
 
         %struct.decode_t = type { i8, i8, i8, i8, i16, i8, i8, %struct.range_t** }
         %struct.range_t = type { float, float, i32, i32, i32, [0 x i8] }
 
-define i32 @decode_byte(%struct.decode_t* %decode) {
+define i32 @decode_byte(%struct.decode_t* %decode) nounwind {
+; CHECK: decode_byte:
+; CHECK: pushl
+; CHECK: popl
+; CHECK: popl
+; CHECK: jmp
 entry:
         %tmp2 = getelementptr %struct.decode_t* %decode, i32 0, i32 4           ; <i16*> [#uses=1]
         %tmp23 = bitcast i16* %tmp2 to i32*             ; <i32*> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/codegen-dce.ll b/libclamav/c++/llvm/test/CodeGen/X86/codegen-dce.ll
new file mode 100644
index 0000000..d83efaf
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/codegen-dce.ll
@@ -0,0 +1,43 @@
+; RUN: llc < %s -march=x86 -stats |& grep {codegen-dce} | grep {Number of dead instructions deleted}
+
+	%struct.anon = type { [3 x double], double, %struct.node*, [64 x %struct.bnode*], [64 x %struct.bnode*] }
+	%struct.bnode = type { i16, double, [3 x double], i32, i32, [3 x double], [3 x double], [3 x double], double, %struct.bnode*, %struct.bnode* }
+	%struct.node = type { i16, double, [3 x double], i32, i32 }
+
+define i32 @main(i32 %argc, i8** nocapture %argv) nounwind {
+entry:
+	%0 = malloc %struct.anon		; <%struct.anon*> [#uses=2]
+	%1 = getelementptr %struct.anon* %0, i32 0, i32 2		; <%struct.node**> [#uses=1]
+	br label %bb14.i
+
+bb14.i:		; preds = %bb14.i, %entry
+	%i8.0.reg2mem.0.i = phi i32 [ 0, %entry ], [ %2, %bb14.i ]		; <i32> [#uses=1]
+	%2 = add i32 %i8.0.reg2mem.0.i, 1		; <i32> [#uses=2]
+	%exitcond74.i = icmp eq i32 %2, 32		; <i1> [#uses=1]
+	br i1 %exitcond74.i, label %bb32.i, label %bb14.i
+
+bb32.i:		; preds = %bb32.i, %bb14.i
+	%tmp.0.reg2mem.0.i = phi i32 [ %indvar.next63.i, %bb32.i ], [ 0, %bb14.i ]		; <i32> [#uses=1]
+	%indvar.next63.i = add i32 %tmp.0.reg2mem.0.i, 1		; <i32> [#uses=2]
+	%exitcond64.i = icmp eq i32 %indvar.next63.i, 64		; <i1> [#uses=1]
+	br i1 %exitcond64.i, label %bb47.loopexit.i, label %bb32.i
+
+bb.i.i:		; preds = %bb47.loopexit.i
+	unreachable
+
+stepsystem.exit.i:		; preds = %bb47.loopexit.i
+	store %struct.node* null, %struct.node** %1, align 4
+	br label %bb.i6.i
+
+bb.i6.i:		; preds = %bb.i6.i, %stepsystem.exit.i
+	br i1 false, label %bb107.i.i, label %bb.i6.i
+
+bb107.i.i:		; preds = %bb107.i.i, %bb.i6.i
+	%q_addr.0.i.i.in = phi %struct.bnode** [ null, %bb107.i.i ], [ %3, %bb.i6.i ]		; <%struct.bnode**> [#uses=0]
+	br label %bb107.i.i
+
+bb47.loopexit.i:		; preds = %bb32.i
+	%3 = getelementptr %struct.anon* %0, i32 0, i32 4, i32 0		; <%struct.bnode**> [#uses=1]
+	%4 = icmp eq %struct.node* null, null		; <i1> [#uses=1]
+	br i1 %4, label %stepsystem.exit.i, label %bb.i.i
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/convert-2-addr-3-addr-inc64.ll b/libclamav/c++/llvm/test/CodeGen/X86/convert-2-addr-3-addr-inc64.ll
index 337f1b2..8e38fe3 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/convert-2-addr-3-addr-inc64.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/convert-2-addr-3-addr-inc64.ll
@@ -1,19 +1,20 @@
 ; RUN: llc < %s -march=x86-64 -o %t -stats -info-output-file - | \
-; RUN:   grep {asm-printer} | grep {Number of machine instrs printed} | grep 5
+; RUN:   grep {asm-printer} | grep {Number of machine instrs printed} | grep 10
 ; RUN: grep {leal	1(\%rsi),} %t
 
-define fastcc zeroext i8 @fullGtU(i32 %i1, i32 %i2) nounwind optsize {
+define fastcc zeroext i8 @fullGtU(i32 %i1, i32 %i2, i8* %ptr) nounwind optsize {
 entry:
   %0 = add i32 %i2, 1           ; <i32> [#uses=1]
   %1 = sext i32 %0 to i64               ; <i64> [#uses=1]
-  %2 = getelementptr i8* null, i64 %1           ; <i8*> [#uses=1]
+  %2 = getelementptr i8* %ptr, i64 %1           ; <i8*> [#uses=1]
   %3 = load i8* %2, align 1             ; <i8> [#uses=1]
   %4 = icmp eq i8 0, %3         ; <i1> [#uses=1]
   br i1 %4, label %bb3, label %bb34
 
 bb3:            ; preds = %entry
   %5 = add i32 %i2, 4           ; <i32> [#uses=0]
-  ret i8 0
+  %6 = trunc i32 %5 to i8
+  ret i8 %6
 
 bb34:           ; preds = %entry
   ret i8 0
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/dllexport.ll b/libclamav/c++/llvm/test/CodeGen/X86/dllexport.ll
new file mode 100644
index 0000000..2c699bf
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/dllexport.ll
@@ -0,0 +1,12 @@
+; RUN: llc < %s | FileCheck %s
+; PR2936
+
+target triple = "i386-mingw32"
+
+define dllexport x86_fastcallcc i32 @foo() nounwind  {
+entry:
+	ret i32 0
+}
+
+; CHECK: .section .drectve
+; CHECK: -export:@foo at 0
\ No newline at end of file
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/fastcall-correct-mangling.ll b/libclamav/c++/llvm/test/CodeGen/X86/fastcall-correct-mangling.ll
index 2b48f5f..33b18bb 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/fastcall-correct-mangling.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/fastcall-correct-mangling.ll
@@ -1,9 +1,9 @@
-; RUN: llc < %s -mtriple=i386-unknown-mingw32 | \
-; RUN:   grep {@12}
+; RUN: llc < %s -mtriple=i386-unknown-mingw32 | FileCheck %s
 
 ; Check that a fastcall function gets correct mangling
 
 define x86_fastcallcc void @func(i64 %X, i8 %Y, i8 %G, i16 %Z) {
+; CHECK: @func at 20:
         ret void
 }
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/full-lsr.ll b/libclamav/c++/llvm/test/CodeGen/X86/full-lsr.ll
index 3bd58b6..ff9b1b0 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/full-lsr.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/full-lsr.ll
@@ -1,12 +1,7 @@
 ; RUN: llc < %s -march=x86 >%t
 
-; TODO: Enhance full lsr mode to get this:
-; RUNX: grep {addl	\\\$4,} %t | count 3
-; RUNX: not grep {,%} %t
-
-; For now, it should find this, which is still pretty good:
-; RUN: not grep {addl	\\\$4,} %t
-; RUN: grep {,%} %t | count 6
+; RUN: grep {addl	\\\$4,} %t | count 3
+; RUN: not grep {,%} %t
 
 define void @foo(float* nocapture %A, float* nocapture %B, float* nocapture %C, i32 %N) nounwind {
 entry:
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/ins_subreg_coalesce-3.ll b/libclamav/c++/llvm/test/CodeGen/X86/ins_subreg_coalesce-3.ll
index e443085..627edc5 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/ins_subreg_coalesce-3.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/ins_subreg_coalesce-3.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -march=x86-64 | grep mov | count 11
+; RUN: llc < %s -march=x86-64 | grep mov | count 5
 
 	%struct.COMPOSITE = type { i8, i16, i16 }
 	%struct.FILE = type { i8*, i32, i32, i16, i16, %struct.__sbuf, i32, i8*, i32 (i8*)*, i32 (i8*, i8*, i32)*, i64 (i8*, i64, i32)*, i32 (i8*, i8*, i32)*, %struct.__sbuf, %struct.__sFILEX*, i32, [3 x i8], [1 x i8], %struct.__sbuf, i32, i64 }
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/iv-users-in-other-loops.ll b/libclamav/c++/llvm/test/CodeGen/X86/iv-users-in-other-loops.ll
index c695c29..408fb20 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/iv-users-in-other-loops.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/iv-users-in-other-loops.ll
@@ -1,11 +1,11 @@
 ; RUN: llc < %s -march=x86-64 -o %t
-; RUN: grep inc %t | count 1
+; RUN: not grep inc %t
 ; RUN: grep dec %t | count 2
 ; RUN: grep addq %t | count 13
 ; RUN: not grep addb %t
-; RUN: grep leaq %t | count 9
-; RUN: grep leal %t | count 3
-; RUN: grep movq %t | count 5
+; RUN: not grep leaq %t
+; RUN: not grep leal %t
+; RUN: not grep movq %t
 
 ; IV users in each of the loops from other loops shouldn't cause LSR
 ; to insert new induction variables. Previously it would create a
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce-2.ll b/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce-2.ll
index 30b5114..b546462 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce-2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce-2.ll
@@ -1,11 +1,24 @@
-; RUN: llc < %s -march=x86 -relocation-model=pic | \
-; RUN:   grep {, 4} | count 1
-; RUN: llc < %s -march=x86 | not grep lea
+; RUN: llc < %s -march=x86 -relocation-model=pic | FileCheck %s -check-prefix=PIC
+; RUN: llc < %s -march=x86 -relocation-model=static | FileCheck %s -check-prefix=STATIC
 ;
 ; Make sure the common loop invariant A is hoisted up to preheader,
 ; since too many registers are needed to subsume it into the addressing modes.
 ; It's safe to sink A in when it's not pic.
 
+; PIC:  align
+; PIC:  movl  $4, -4([[REG:%e[a-z]+]])
+; PIC:  movl  $5, ([[REG]])
+; PIC:  addl  $4, [[REG]]
+; PIC:  decl  {{%e[[a-z]+}}
+; PIC:  jne
+
+; STATIC: align
+; STATIC: movl  $4, -4(%ecx)
+; STATIC: movl  $5, (%ecx)
+; STATIC: addl  $4, %ecx
+; STATIC: decl  %eax
+; STATIC: jne
+
 @A = global [16 x [16 x i32]] zeroinitializer, align 32		; <[16 x [16 x i32]]*> [#uses=2]
 
 define void @test(i32 %row, i32 %N.in) nounwind {
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce-3.ll b/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce-3.ll
index 70c9134..b1c9fb9 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce-3.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce-3.ll
@@ -1,8 +1,11 @@
-; RUN: llc < %s -mtriple=i386-apple-darwin -relocation-model=dynamic-no-pic | \
-; RUN:   grep {A+} | count 2
-;
-; Make sure the common loop invariant A is not hoisted up to preheader,
-; since it can be subsumed it into the addressing modes.
+; RUN: llc < %s -mtriple=i386-apple-darwin -relocation-model=dynamic-no-pic | FileCheck %s
+
+; CHECK: align
+; CHECK: movl  $4, -4(%ecx)
+; CHECK: movl  $5, (%ecx)
+; CHECK: addl  $4, %ecx
+; CHECK: decl  %eax
+; CHECK: jne
 
 @A = global [16 x [16 x i32]] zeroinitializer, align 32		; <[16 x [16 x i32]]*> [#uses=2]
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce.ll b/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce.ll
index 4cb56ca..42c6ac4 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce.ll
@@ -1,8 +1,11 @@
-; RUN: llc < %s -march=x86 -relocation-model=static | \
-; RUN:   grep {A+} | count 2
-;
-; Make sure the common loop invariant A is not hoisted up to preheader,
-; since it can be subsumed into the addressing mode in all uses.
+; RUN: llc < %s -march=x86 -relocation-model=static | FileCheck %s
+
+; CHECK: align
+; CHECK: movl  $4, -4(%ecx)
+; CHECK: movl  $5, (%ecx)
+; CHECK: addl  $4, %ecx
+; CHECK: decl  %eax
+; CHECK: jne
 
 @A = internal global [16 x [16 x i32]] zeroinitializer, align 32		; <[16 x [16 x i32]]*> [#uses=2]
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce4.ll b/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce4.ll
index 07e46ec..6c0eb8c 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce4.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce4.ll
@@ -1,5 +1,19 @@
-; RUN: llc < %s -march=x86 | grep cmp | grep 64
-; RUN: llc < %s -march=x86 | not grep inc
+; RUN: llc < %s -march=x86 -relocation-model=static -mtriple=i686-apple-darwin | FileCheck %s -check-prefix=STATIC
+; RUN: llc < %s -march=x86 -relocation-model=pic | FileCheck %s -check-prefix=PIC
+
+; By starting the IV at -64 instead of 0, a cmp is eliminated,
+; as the flags from the add can be used directly.
+
+; STATIC: movl    $-64, %ecx
+
+; STATIC: movl    %eax, _state+76(%ecx)
+; STATIC: addl    $16, %ecx
+; STATIC: jne
+
+; In PIC mode the symbol can't be folded, so the change-compare-stride
+; trick applies.
+
+; PIC: cmpl $64
 
 @state = external global [0 x i32]		; <[0 x i32]*> [#uses=4]
 @S = external global [0 x i32]		; <[0 x i32]*> [#uses=4]
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce8.ll b/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce8.ll
index e14cd8a..6b2247d 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce8.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/loop-strength-reduce8.ll
@@ -1,4 +1,10 @@
-; RUN: llc < %s -mtriple=i386-apple-darwin | grep leal | not grep 16
+; RUN: llc < %s -mtriple=i386-apple-darwin | FileCheck %s
+
+; CHECK: leal 16(%eax), %edx
+; CHECK: align
+; CHECK: addl    $4, %edx
+; CHECK: decl    %ecx
+; CHECK: jne     LBB1_2
 
 	%struct.CUMULATIVE_ARGS = type { i32, i32, i32, i32, i32, i32, i32 }
 	%struct.bitmap_element = type { %struct.bitmap_element*, %struct.bitmap_element*, i32, [2 x i64] }
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/lsr-reuse.ll b/libclamav/c++/llvm/test/CodeGen/X86/lsr-reuse.ll
new file mode 100644
index 0000000..7f2b8cc
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/lsr-reuse.ll
@@ -0,0 +1,386 @@
+; RUN: llc < %s -march=x86-64 -O3 | FileCheck %s
+target datalayout = "e-p:64:64:64"
+target triple = "x86_64-unknown-unknown"
+
+; Full strength reduction reduces register pressure from 5 to 4 here.
+; Instruction selection should use the FLAGS value from the dec for
+; the branch. Scheduling should push the adds upwards.
+
+; CHECK: full_me_0:
+; CHECK: movsd   (%rsi), %xmm0
+; CHECK: addq    $8, %rsi
+; CHECK: mulsd   (%rdx), %xmm0
+; CHECK: addq    $8, %rdx
+; CHECK: movsd   %xmm0, (%rdi)
+; CHECK: addq    $8, %rdi
+; CHECK: decq    %rcx
+; CHECK: jne
+
+define void @full_me_0(double* nocapture %A, double* nocapture %B, double* nocapture %C, i64 %n) nounwind {
+entry:
+  %t0 = icmp sgt i64 %n, 0
+  br i1 %t0, label %loop, label %return
+
+loop:
+  %i = phi i64 [ %i.next, %loop ], [ 0, %entry ]
+  %Ai = getelementptr inbounds double* %A, i64 %i
+  %Bi = getelementptr inbounds double* %B, i64 %i
+  %Ci = getelementptr inbounds double* %C, i64 %i
+  %t1 = load double* %Bi
+  %t2 = load double* %Ci
+  %m = fmul double %t1, %t2
+  store double %m, double* %Ai
+  %i.next = add nsw i64 %i, 1
+  %exitcond = icmp eq i64 %i.next, %n
+  br i1 %exitcond, label %return, label %loop
+
+return:
+  ret void
+}
+
+; Mostly-full strength reduction means we do full strength reduction on all
+; except for the offsets.
+;
+; Given a choice between constant offsets -2048 and 2048, choose the negative
+; value, because at boundary conditions it has a smaller encoding.
+; TODO: That's an over-general heuristic. It would be better for the target
+; to indicate what the encoding cost would be. Then using a 2048 offset
+; would be better on x86-64, since the start value would be 0 instead of
+; 2048.
+
+; CHECK: mostly_full_me_0:
+; CHECK: movsd   -2048(%rsi), %xmm0
+; CHECK: mulsd   -2048(%rdx), %xmm0
+; CHECK: movsd   %xmm0, -2048(%rdi)
+; CHECK: movsd   (%rsi), %xmm0
+; CHECK: addq    $8, %rsi
+; CHECK: divsd   (%rdx), %xmm0
+; CHECK: addq    $8, %rdx
+; CHECK: movsd   %xmm0, (%rdi)
+; CHECK: addq    $8, %rdi
+; CHECK: decq    %rcx
+; CHECK: jne
+
+define void @mostly_full_me_0(double* nocapture %A, double* nocapture %B, double* nocapture %C, i64 %n) nounwind {
+entry:
+  %t0 = icmp sgt i64 %n, 0
+  br i1 %t0, label %loop, label %return
+
+loop:
+  %i = phi i64 [ %i.next, %loop ], [ 0, %entry ]
+  %Ai = getelementptr inbounds double* %A, i64 %i
+  %Bi = getelementptr inbounds double* %B, i64 %i
+  %Ci = getelementptr inbounds double* %C, i64 %i
+  %t1 = load double* %Bi
+  %t2 = load double* %Ci
+  %m = fmul double %t1, %t2
+  store double %m, double* %Ai
+  %j = add i64 %i, 256
+  %Aj = getelementptr inbounds double* %A, i64 %j
+  %Bj = getelementptr inbounds double* %B, i64 %j
+  %Cj = getelementptr inbounds double* %C, i64 %j
+  %t3 = load double* %Bj
+  %t4 = load double* %Cj
+  %o = fdiv double %t3, %t4
+  store double %o, double* %Aj
+  %i.next = add nsw i64 %i, 1
+  %exitcond = icmp eq i64 %i.next, %n
+  br i1 %exitcond, label %return, label %loop
+
+return:
+  ret void
+}
+
+; A minor variation on mostly_full_me_0.
+; Prefer to start the indvar at 0.
+
+; CHECK: mostly_full_me_1:
+; CHECK: movsd   (%rsi), %xmm0
+; CHECK: mulsd   (%rdx), %xmm0
+; CHECK: movsd   %xmm0, (%rdi)
+; CHECK: movsd   -2048(%rsi), %xmm0
+; CHECK: addq    $8, %rsi
+; CHECK: divsd   -2048(%rdx), %xmm0
+; CHECK: addq    $8, %rdx
+; CHECK: movsd   %xmm0, -2048(%rdi)
+; CHECK: addq    $8, %rdi
+; CHECK: decq    %rcx
+; CHECK: jne
+
+define void @mostly_full_me_1(double* nocapture %A, double* nocapture %B, double* nocapture %C, i64 %n) nounwind {
+entry:
+  %t0 = icmp sgt i64 %n, 0
+  br i1 %t0, label %loop, label %return
+
+loop:
+  %i = phi i64 [ %i.next, %loop ], [ 0, %entry ]
+  %Ai = getelementptr inbounds double* %A, i64 %i
+  %Bi = getelementptr inbounds double* %B, i64 %i
+  %Ci = getelementptr inbounds double* %C, i64 %i
+  %t1 = load double* %Bi
+  %t2 = load double* %Ci
+  %m = fmul double %t1, %t2
+  store double %m, double* %Ai
+  %j = sub i64 %i, 256
+  %Aj = getelementptr inbounds double* %A, i64 %j
+  %Bj = getelementptr inbounds double* %B, i64 %j
+  %Cj = getelementptr inbounds double* %C, i64 %j
+  %t3 = load double* %Bj
+  %t4 = load double* %Cj
+  %o = fdiv double %t3, %t4
+  store double %o, double* %Aj
+  %i.next = add nsw i64 %i, 1
+  %exitcond = icmp eq i64 %i.next, %n
+  br i1 %exitcond, label %return, label %loop
+
+return:
+  ret void
+}
+
+; A slightly less minor variation on mostly_full_me_0.
+
+; CHECK: mostly_full_me_2:
+; CHECK: movsd   (%rsi), %xmm0
+; CHECK: mulsd   (%rdx), %xmm0
+; CHECK: movsd   %xmm0, (%rdi)
+; CHECK: movsd   -4096(%rsi), %xmm0
+; CHECK: addq    $8, %rsi
+; CHECK: divsd   -4096(%rdx), %xmm0
+; CHECK: addq    $8, %rdx
+; CHECK: movsd   %xmm0, -4096(%rdi)
+; CHECK: addq    $8, %rdi
+; CHECK: decq    %rcx
+; CHECK: jne
+
+define void @mostly_full_me_2(double* nocapture %A, double* nocapture %B, double* nocapture %C, i64 %n) nounwind {
+entry:
+  %t0 = icmp sgt i64 %n, 0
+  br i1 %t0, label %loop, label %return
+
+loop:
+  %i = phi i64 [ %i.next, %loop ], [ 0, %entry ]
+  %k = add i64 %i, 256
+  %Ak = getelementptr inbounds double* %A, i64 %k
+  %Bk = getelementptr inbounds double* %B, i64 %k
+  %Ck = getelementptr inbounds double* %C, i64 %k
+  %t1 = load double* %Bk
+  %t2 = load double* %Ck
+  %m = fmul double %t1, %t2
+  store double %m, double* %Ak
+  %j = sub i64 %i, 256
+  %Aj = getelementptr inbounds double* %A, i64 %j
+  %Bj = getelementptr inbounds double* %B, i64 %j
+  %Cj = getelementptr inbounds double* %C, i64 %j
+  %t3 = load double* %Bj
+  %t4 = load double* %Cj
+  %o = fdiv double %t3, %t4
+  store double %o, double* %Aj
+  %i.next = add nsw i64 %i, 1
+  %exitcond = icmp eq i64 %i.next, %n
+  br i1 %exitcond, label %return, label %loop
+
+return:
+  ret void
+}
+
+; In this test, the counting IV exit value is used, so full strength reduction
+; would not reduce register pressure. IndVarSimplify ought to simplify such
+; cases away, but it's useful here to verify that LSR's register pressure
+; heuristics are working as expected.
+
+; CHECK: count_me_0:
+; CHECK: movsd   (%rsi,%rax,8), %xmm0
+; CHECK: mulsd   (%rdx,%rax,8), %xmm0
+; CHECK: movsd   %xmm0, (%rdi,%rax,8)
+; CHECK: incq    %rax
+; CHECK: cmpq    %rax, %rcx
+; CHECK: jne
+
+define i64 @count_me_0(double* nocapture %A, double* nocapture %B, double* nocapture %C, i64 %n) nounwind {
+entry:
+  %t0 = icmp sgt i64 %n, 0
+  br i1 %t0, label %loop, label %return
+
+loop:
+  %i = phi i64 [ %i.next, %loop ], [ 0, %entry ]
+  %Ai = getelementptr inbounds double* %A, i64 %i
+  %Bi = getelementptr inbounds double* %B, i64 %i
+  %Ci = getelementptr inbounds double* %C, i64 %i
+  %t1 = load double* %Bi
+  %t2 = load double* %Ci
+  %m = fmul double %t1, %t2
+  store double %m, double* %Ai
+  %i.next = add nsw i64 %i, 1
+  %exitcond = icmp eq i64 %i.next, %n
+  br i1 %exitcond, label %return, label %loop
+
+return:
+  %q = phi i64 [ 0, %entry ], [ %i.next, %loop ]
+  ret i64 %q
+}
+
+; In this test, the trip count value is used, so full strength reduction
+; would not reduce register pressure.
+; (though it would reduce register pressure inside the loop...)
+
+; CHECK: count_me_1:
+; CHECK: movsd   (%rsi,%rax,8), %xmm0
+; CHECK: mulsd   (%rdx,%rax,8), %xmm0
+; CHECK: movsd   %xmm0, (%rdi,%rax,8)
+; CHECK: incq    %rax
+; CHECK: cmpq    %rax, %rcx
+; CHECK: jne
+
+define i64 @count_me_1(double* nocapture %A, double* nocapture %B, double* nocapture %C, i64 %n) nounwind {
+entry:
+  %t0 = icmp sgt i64 %n, 0
+  br i1 %t0, label %loop, label %return
+
+loop:
+  %i = phi i64 [ %i.next, %loop ], [ 0, %entry ]
+  %Ai = getelementptr inbounds double* %A, i64 %i
+  %Bi = getelementptr inbounds double* %B, i64 %i
+  %Ci = getelementptr inbounds double* %C, i64 %i
+  %t1 = load double* %Bi
+  %t2 = load double* %Ci
+  %m = fmul double %t1, %t2
+  store double %m, double* %Ai
+  %i.next = add nsw i64 %i, 1
+  %exitcond = icmp eq i64 %i.next, %n
+  br i1 %exitcond, label %return, label %loop
+
+return:
+  %q = phi i64 [ 0, %entry ], [ %n, %loop ]
+  ret i64 %q
+}
+
+; Full strength reduction doesn't save any registers here because the
+; loop tripcount is a constant.
+
+; CHECK: count_me_2:
+; CHECK: movl    $10, %eax
+; CHECK: align
+; CHECK: BB7_1:
+; CHECK: movsd   -40(%rdi,%rax,8), %xmm0
+; CHECK: addsd   -40(%rsi,%rax,8), %xmm0
+; CHECK: movsd   %xmm0, -40(%rdx,%rax,8)
+; CHECK: movsd   (%rdi,%rax,8), %xmm0
+; CHECK: subsd   (%rsi,%rax,8), %xmm0
+; CHECK: movsd   %xmm0, (%rdx,%rax,8)
+; CHECK: incq    %rax
+; CHECK: cmpq    $5010, %rax
+; CHECK: jne
+
+define void @count_me_2(double* nocapture %A, double* nocapture %B, double* nocapture %C) nounwind {
+entry:
+  br label %loop
+
+loop:
+  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
+  %i5 = add i64 %i, 5
+  %Ai = getelementptr double* %A, i64 %i5
+  %t2 = load double* %Ai
+  %Bi = getelementptr double* %B, i64 %i5
+  %t4 = load double* %Bi
+  %t5 = fadd double %t2, %t4
+  %Ci = getelementptr double* %C, i64 %i5
+  store double %t5, double* %Ci
+  %i10 = add i64 %i, 10
+  %Ai10 = getelementptr double* %A, i64 %i10
+  %t9 = load double* %Ai10
+  %Bi10 = getelementptr double* %B, i64 %i10
+  %t11 = load double* %Bi10
+  %t12 = fsub double %t9, %t11
+  %Ci10 = getelementptr double* %C, i64 %i10
+  store double %t12, double* %Ci10
+  %i.next = add i64 %i, 1
+  %exitcond = icmp eq i64 %i.next, 5000
+  br i1 %exitcond, label %return, label %loop
+
+return:
+  ret void
+}
+
+; This should be fully strength-reduced to reduce register pressure.
+
+; CHECK: full_me_1:
+; CHECK: align
+; CHECK: BB8_1:
+; CHECK: movsd   (%rdi), %xmm0
+; CHECK: addsd   (%rsi), %xmm0
+; CHECK: movsd   %xmm0, (%rdx)
+; CHECK: movsd   40(%rdi), %xmm0
+; CHECK: addq    $8, %rdi
+; CHECK: subsd   40(%rsi), %xmm0
+; CHECK: addq    $8, %rsi
+; CHECK: movsd   %xmm0, 40(%rdx)
+; CHECK: addq    $8, %rdx
+; CHECK: decq    %rcx
+; CHECK: jne
+
+define void @full_me_1(double* nocapture %A, double* nocapture %B, double* nocapture %C, i64 %n) nounwind {
+entry:
+  br label %loop
+
+loop:
+  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
+  %i5 = add i64 %i, 5
+  %Ai = getelementptr double* %A, i64 %i5
+  %t2 = load double* %Ai
+  %Bi = getelementptr double* %B, i64 %i5
+  %t4 = load double* %Bi
+  %t5 = fadd double %t2, %t4
+  %Ci = getelementptr double* %C, i64 %i5
+  store double %t5, double* %Ci
+  %i10 = add i64 %i, 10
+  %Ai10 = getelementptr double* %A, i64 %i10
+  %t9 = load double* %Ai10
+  %Bi10 = getelementptr double* %B, i64 %i10
+  %t11 = load double* %Bi10
+  %t12 = fsub double %t9, %t11
+  %Ci10 = getelementptr double* %C, i64 %i10
+  store double %t12, double* %Ci10
+  %i.next = add i64 %i, 1
+  %exitcond = icmp eq i64 %i.next, %n
+  br i1 %exitcond, label %return, label %loop
+
+return:
+  ret void
+}
+
+; This is a variation on full_me_0 in which the 0,+,1 induction variable
+; has a non-address use, pinning that value in a register.
+
+; CHECK: count_me_3:
+; CHECK: call
+; CHECK: movsd   (%r15,%r13,8), %xmm0
+; CHECK: mulsd   (%r14,%r13,8), %xmm0
+; CHECK: movsd   %xmm0, (%r12,%r13,8)
+; CHECK: incq    %r13
+; CHECK: cmpq    %r13, %rbx
+; CHECK: jne
+
+declare void @use(i64)
+
+define void @count_me_3(double* nocapture %A, double* nocapture %B, double* nocapture %C, i64 %n) nounwind {
+entry:
+  %t0 = icmp sgt i64 %n, 0
+  br i1 %t0, label %loop, label %return
+
+loop:
+  %i = phi i64 [ %i.next, %loop ], [ 0, %entry ]
+  call void @use(i64 %i)
+  %Ai = getelementptr inbounds double* %A, i64 %i
+  %Bi = getelementptr inbounds double* %B, i64 %i
+  %Ci = getelementptr inbounds double* %C, i64 %i
+  %t1 = load double* %Bi
+  %t2 = load double* %Ci
+  %m = fmul double %t1, %t2
+  store double %m, double* %Ai
+  %i.next = add nsw i64 %i, 1
+  %exitcond = icmp eq i64 %i.next, %n
+  br i1 %exitcond, label %return, label %loop
+
+return:
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/masked-iv-safe.ll b/libclamav/c++/llvm/test/CodeGen/X86/masked-iv-safe.ll
index bc493bd..0b4d73a 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/masked-iv-safe.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/masked-iv-safe.ll
@@ -169,7 +169,7 @@ loop:
 	%indvar.i24 = and i64 %indvar, 16777215
 	%t3 = getelementptr double* %d, i64 %indvar.i24
 	%t4 = load double* %t3
-	%t5 = fmul double %t4, 2.3
+	%t5 = fdiv double %t4, 2.3
 	store double %t5, double* %t3
 	%t6 = getelementptr double* %d, i64 %indvar
 	%t7 = load double* %t6
@@ -199,7 +199,7 @@ loop:
 	%indvar.i24 = ashr i64 %s1, 24
 	%t3 = getelementptr double* %d, i64 %indvar.i24
 	%t4 = load double* %t3
-	%t5 = fmul double %t4, 2.3
+	%t5 = fdiv double %t4, 2.3
 	store double %t5, double* %t3
 	%t6 = getelementptr double* %d, i64 %indvar
 	%t7 = load double* %t6
@@ -229,7 +229,7 @@ loop:
 	%indvar.i24 = ashr i64 %s1, 24
 	%t3 = getelementptr double* %d, i64 %indvar.i24
 	%t4 = load double* %t3
-	%t5 = fmul double %t4, 2.3
+	%t5 = fdiv double %t4, 2.3
 	store double %t5, double* %t3
 	%t6 = getelementptr double* %d, i64 %indvar
 	%t7 = load double* %t6
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/nancvt.ll b/libclamav/c++/llvm/test/CodeGen/X86/nancvt.ll
index c30767c..3b76b6d 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/nancvt.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/nancvt.ll
@@ -17,6 +17,8 @@ target triple = "i686-apple-darwin8"
 @.str = internal constant [10 x i8] c"%08x%08x\0A\00"		; <[10 x i8]*> [#uses=2]
 @.str1 = internal constant [6 x i8] c"%08x\0A\00"		; <[6 x i8]*> [#uses=2]
 
+ at var = external global i32
+
 define i32 @main() {
 entry:
 	%retval = alloca i32, align 4		; <i32*> [#uses=1]
@@ -51,7 +53,8 @@ bb:		; preds = %bb23
 	%tmp17 = ashr i64 %tmp16, %.cast		; <i64> [#uses=1]
 	%tmp1718 = trunc i64 %tmp17 to i32		; <i32> [#uses=1]
 	%tmp19 = getelementptr [10 x i8]* @.str, i32 0, i32 0		; <i8*> [#uses=1]
-	%tmp20 = call i32 (i8*, ...)* @printf( i8* %tmp19, i32 %tmp1718, i32 %tmp13 )		; <i32> [#uses=0]
+	volatile store i32 %tmp1718, i32* @var
+	volatile store i32 %tmp13, i32* @var
 	%tmp21 = load i32* %i, align 4		; <i32> [#uses=1]
 	%tmp22 = add i32 %tmp21, 1		; <i32> [#uses=1]
 	store i32 %tmp22, i32* %i, align 4
@@ -84,7 +87,7 @@ bb28:		; preds = %bb46
 	%tmp3940 = bitcast float* %tmp39 to i32*		; <i32*> [#uses=1]
 	%tmp41 = load i32* %tmp3940, align 4		; <i32> [#uses=1]
 	%tmp42 = getelementptr [6 x i8]* @.str1, i32 0, i32 0		; <i8*> [#uses=1]
-	%tmp43 = call i32 (i8*, ...)* @printf( i8* %tmp42, i32 %tmp41 )		; <i32> [#uses=0]
+	volatile store i32 %tmp41, i32* @var
 	%tmp44 = load i32* %i, align 4		; <i32> [#uses=1]
 	%tmp45 = add i32 %tmp44, 1		; <i32> [#uses=1]
 	store i32 %tmp45, i32* %i, align 4
@@ -125,7 +128,8 @@ bb52:		; preds = %bb78
 	%tmp72 = ashr i64 %tmp70, %.cast71		; <i64> [#uses=1]
 	%tmp7273 = trunc i64 %tmp72 to i32		; <i32> [#uses=1]
 	%tmp74 = getelementptr [10 x i8]* @.str, i32 0, i32 0		; <i8*> [#uses=1]
-	%tmp75 = call i32 (i8*, ...)* @printf( i8* %tmp74, i32 %tmp7273, i32 %tmp66 )		; <i32> [#uses=0]
+	volatile store i32 %tmp7273, i32* @var
+	volatile store i32 %tmp66, i32* @var
 	%tmp76 = load i32* %i, align 4		; <i32> [#uses=1]
 	%tmp77 = add i32 %tmp76, 1		; <i32> [#uses=1]
 	store i32 %tmp77, i32* %i, align 4
@@ -158,7 +162,7 @@ bb84:		; preds = %bb101
 	%tmp9495 = bitcast float* %tmp94 to i32*		; <i32*> [#uses=1]
 	%tmp96 = load i32* %tmp9495, align 4		; <i32> [#uses=1]
 	%tmp97 = getelementptr [6 x i8]* @.str1, i32 0, i32 0		; <i8*> [#uses=1]
-	%tmp98 = call i32 (i8*, ...)* @printf( i8* %tmp97, i32 %tmp96 )		; <i32> [#uses=0]
+	volatile store i32 %tmp96, i32* @var
 	%tmp99 = load i32* %i, align 4		; <i32> [#uses=1]
 	%tmp100 = add i32 %tmp99, 1		; <i32> [#uses=1]
 	store i32 %tmp100, i32* %i, align 4
@@ -178,5 +182,3 @@ return:		; preds = %bb106
 	%retval107 = load i32* %retval		; <i32> [#uses=1]
 	ret i32 %retval107
 }
-
-declare i32 @printf(i8*, ...)
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/personality.ll b/libclamav/c++/llvm/test/CodeGen/X86/personality.ll
index 5acf04c..ce57e8f 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/personality.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/personality.ll
@@ -39,7 +39,7 @@ declare void @__gxx_personality_v0()
 declare void @__cxa_end_catch()
 
 ; X64: Leh_frame_common_begin:
-; X64: .long	___gxx_personality_v0 at GOTPCREL+4
+; X64: .long	(___gxx_personality_v0 at GOTPCREL)+4
 
 ; X32: Leh_frame_common_begin:
 ; X32: .long	L___gxx_personality_v0$non_lazy_ptr-
diff --git a/libclamav/c++/llvm/test/CodeGen/Generic/phi-immediate-factoring.ll b/libclamav/c++/llvm/test/CodeGen/X86/phi-immediate-factoring.ll
similarity index 100%
rename from libclamav/c++/llvm/test/CodeGen/Generic/phi-immediate-factoring.ll
rename to libclamav/c++/llvm/test/CodeGen/X86/phi-immediate-factoring.ll
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/phys-reg-local-regalloc.ll b/libclamav/c++/llvm/test/CodeGen/X86/phys-reg-local-regalloc.ll
index e5e2d4b..045841e 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/phys-reg-local-regalloc.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/phys-reg-local-regalloc.ll
@@ -1,4 +1,6 @@
 ; RUN: llc < %s -march=x86 -mtriple=i386-apple-darwin9 -regalloc=local | FileCheck %s
+; RUN: llc -O0 < %s -march=x86 -mtriple=i386-apple-darwin9 -regalloc=local | FileCheck %s
+; CHECKed instructions should be the same with or without -O0.
 
 @.str = private constant [12 x i8] c"x + y = %i\0A\00", align 1 ; <[12 x i8]*> [#uses=1]
 
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/pic.ll b/libclamav/c++/llvm/test/CodeGen/X86/pic.ll
index e886ba0..d3c28a0 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/pic.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/pic.ll
@@ -190,7 +190,7 @@ bb12:
 ; LINUX: .L8$pb:
 ; LINUX:   addl	$_GLOBAL_OFFSET_TABLE_+(.Lpicbaseref8-.L8$pb),
 ; LINUX:   addl	.LJTI8_0 at GOTOFF(
-; LINUX:   jmpl	*%ecx
+; LINUX:   jmpl	*
 
 ; LINUX: .LJTI8_0:
 ; LINUX:   .long	 .LBB8_2 at GOTOFF
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/pr1505b.ll b/libclamav/c++/llvm/test/CodeGen/X86/pr1505b.ll
index 12736cd..6a08dae 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/pr1505b.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/pr1505b.ll
@@ -1,5 +1,5 @@
-; RUN: llc < %s -mcpu=i486 | grep fstpl | count 4
-; RUN: llc < %s -mcpu=i486 | grep fstps | count 3
+; RUN: llc < %s -mcpu=i486 | grep fstpl | count 5
+; RUN: llc < %s -mcpu=i486 | grep fstps | count 2
 ; PR1505
 
 target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64"
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/pr3495.ll b/libclamav/c++/llvm/test/CodeGen/X86/pr3495.ll
index 1795970..e84a84f 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/pr3495.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/pr3495.ll
@@ -1,8 +1,7 @@
 ; RUN: llc < %s -march=x86 -stats |& grep {Number of loads added} | grep 2
 ; RUN: llc < %s -march=x86 -stats |& grep {Number of register spills} | grep 1
-; RUN: llc < %s -march=x86 -stats |& grep {Number of machine instrs printed} | grep 38
+; RUN: llc < %s -march=x86 -stats |& grep {Number of machine instrs printed} | grep 34
 ; PR3495
-; The loop reversal kicks in once here, resulting in one fewer instruction.
 
 target triple = "i386-pc-linux-gnu"
 @x = external global [8 x i32], align 32		; <[8 x i32]*> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/pre-split8.ll b/libclamav/c++/llvm/test/CodeGen/X86/pre-split8.ll
index ea4b949..0684bd0 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/pre-split8.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/pre-split8.ll
@@ -20,7 +20,7 @@ bb:		; preds = %bb9.i, %entry
 
 bb9.i:		; preds = %bb
 	%2 = fsub double %.rle4, %0		; <double> [#uses=0]
-	%3 = tail call double @asin(double 0.000000e+00) nounwind readonly		; <double> [#uses=0]
+	%3 = tail call double @asin(double %.rle4) nounwind readonly		; <double> [#uses=0]
 	%4 = fmul double 0.000000e+00, %0		; <double> [#uses=1]
 	%5 = tail call double @tan(double 0.000000e+00) nounwind readonly		; <double> [#uses=0]
 	%6 = fmul double %4, 0.000000e+00		; <double> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/pre-split9.ll b/libclamav/c++/llvm/test/CodeGen/X86/pre-split9.ll
index c27d925..86dda33 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/pre-split9.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/pre-split9.ll
@@ -22,7 +22,7 @@ bb:		; preds = %bb9.i, %entry
 
 bb9.i:		; preds = %bb
 	%2 = fsub double %.rle4, %0		; <double> [#uses=0]
-	%3 = tail call double @asin(double 0.000000e+00) nounwind readonly		; <double> [#uses=0]
+	%3 = tail call double @asin(double %.rle4) nounwind readonly		; <double> [#uses=0]
 	%4 = tail call double @sin(double 0.000000e+00) nounwind readonly		; <double> [#uses=1]
 	%5 = fmul double %4, %0		; <double> [#uses=1]
 	%6 = tail call double @tan(double 0.000000e+00) nounwind readonly		; <double> [#uses=0]
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/ptrtoint-constexpr.ll b/libclamav/c++/llvm/test/CodeGen/X86/ptrtoint-constexpr.ll
index 7e33e79..dd97905 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/ptrtoint-constexpr.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/ptrtoint-constexpr.ll
@@ -6,3 +6,9 @@
 ; CHECK: .quad	r&4294967295
 
 @r = global %union.x { i64 ptrtoint (%union.x* @r to i64) }, align 4
+
+; CHECK:	.globl x
+; CHECK: x:
+; CHECK: .quad	3
+
+ at x = global i64 mul (i64 3, i64 ptrtoint (i2* getelementptr (i2* null, i64 1) to i64))
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/scalar_widen_div.ll b/libclamav/c++/llvm/test/CodeGen/X86/scalar_widen_div.ll
index fc67e44..77f320f 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/scalar_widen_div.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/scalar_widen_div.ll
@@ -152,3 +152,32 @@ define <5 x i64> @test_ulong_rem(<5 x i64> %num, <5 x i64> %rem) {
   %rem.r = urem <5 x i64> %num, %rem
   ret <5 x i64>  %rem.r
 }
+
+define void @test_int_div(<3 x i32>* %dest, <3 x i32>* %old, i32 %n) {
+; CHECK: idivl
+; CHECK: idivl
+; CHECK: idivl
+; CHECK-NOT: idivl
+; CHECK: ret
+entry:
+  %cmp13 = icmp sgt i32 %n, 0
+  br i1 %cmp13, label %bb.nph, label %for.end
+
+bb.nph:  
+  br label %for.body
+
+for.body:
+  %i.014 = phi i32 [ 0, %bb.nph ], [ %inc, %for.body ] 
+  %arrayidx11 = getelementptr <3 x i32>* %dest, i32 %i.014
+  %tmp4 = load <3 x i32>* %arrayidx11 ; <<3 x i32>> [#uses=1]
+  %arrayidx7 = getelementptr inbounds <3 x i32>* %old, i32 %i.014
+  %tmp8 = load <3 x i32>* %arrayidx7 ; <<3 x i32>> [#uses=1]
+  %div = sdiv <3 x i32> %tmp4, %tmp8
+  store <3 x i32> %div, <3 x i32>* %arrayidx11
+  %inc = add nsw i32 %i.014, 1
+  %exitcond = icmp eq i32 %inc, %n 
+  br i1 %exitcond, label %for.end, label %for.body
+
+for.end:                                          ; preds = %for.body, %entry
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/sext-i1.ll b/libclamav/c++/llvm/test/CodeGen/X86/sext-i1.ll
index b0771b0..21c418d 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/sext-i1.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/sext-i1.ll
@@ -1,11 +1,17 @@
-; RUN: llc < %s -march=x86 | FileCheck %s
+; RUN: llc < %s -march=x86    | FileCheck %s -check-prefix=32
+; RUN: llc < %s -march=x86-64 | FileCheck %s -check-prefix=64
 ; rdar://7573216
+; PR6146
 
 define i32 @t1(i32 %x) nounwind readnone ssp {
 entry:
-; CHECK: t1:
-; CHECK: cmpl $1
-; CHECK: sbbl
+; 32: t1:
+; 32: cmpl $1
+; 32: sbbl
+
+; 64: t1:
+; 64: cmpl $1
+; 64: sbbl
   %0 = icmp eq i32 %x, 0
   %iftmp.0.0 = select i1 %0, i32 -1, i32 0
   ret i32 %iftmp.0.0
@@ -13,10 +19,45 @@ entry:
 
 define i32 @t2(i32 %x) nounwind readnone ssp {
 entry:
-; CHECK: t2:
-; CHECK: cmpl $1
-; CHECK: sbbl
+; 32: t2:
+; 32: cmpl $1
+; 32: sbbl
+
+; 64: t2:
+; 64: cmpl $1
+; 64: sbbl
   %0 = icmp eq i32 %x, 0
   %iftmp.0.0 = sext i1 %0 to i32
   ret i32 %iftmp.0.0
 }
+
+%struct.zbookmark = type { i64, i64 }
+%struct.zstream = type { }
+
+define i32 @t3() nounwind readonly {
+entry:
+; 32: t3:
+; 32: cmpl $1
+; 32: sbbl
+; 32: cmpl
+; 32: xorl
+
+; 64: t3:
+; 64: cmpl $1
+; 64: sbbq
+; 64: cmpq
+; 64: xorl
+  %not.tobool = icmp eq i32 undef, 0              ; <i1> [#uses=2]
+  %cond = sext i1 %not.tobool to i32              ; <i32> [#uses=1]
+  %conv = sext i1 %not.tobool to i64              ; <i64> [#uses=1]
+  %add13 = add i64 0, %conv                       ; <i64> [#uses=1]
+  %cmp = icmp ult i64 undef, %add13               ; <i1> [#uses=1]
+  br i1 %cmp, label %if.then, label %if.end
+
+if.then:                                          ; preds = %entry
+  br label %if.end
+
+if.end:                                           ; preds = %if.then, %entry
+  %xor27 = xor i32 undef, %cond                   ; <i32> [#uses=0]
+  ret i32 0
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/sse3.ll b/libclamav/c++/llvm/test/CodeGen/X86/sse3.ll
index 5550d26..b2af7c9 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/sse3.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/sse3.ll
@@ -63,10 +63,10 @@ define <8 x i16> @t4(<8 x i16> %A, <8 x i16> %B) nounwind {
 	ret <8 x i16> %tmp
 ; X64: t4:
 ; X64: 	pextrw	$7, %xmm0, %eax
-; X64: 	pshufhw	$100, %xmm0, %xmm1
-; X64: 	pinsrw	$1, %eax, %xmm1
+; X64: 	pshufhw	$100, %xmm0, %xmm2
+; X64: 	pinsrw	$1, %eax, %xmm2
 ; X64: 	pextrw	$1, %xmm0, %eax
-; X64: 	movaps	%xmm1, %xmm0
+; X64: 	movaps	%xmm2, %xmm0
 ; X64: 	pinsrw	$4, %eax, %xmm0
 ; X64: 	ret
 }
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/stack-color-with-reg.ll b/libclamav/c++/llvm/test/CodeGen/X86/stack-color-with-reg.ll
index 7d85818..42e7a39 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/stack-color-with-reg.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/stack-color-with-reg.ll
@@ -1,5 +1,5 @@
 ; RUN: llc < %s -mtriple=x86_64-apple-darwin10 -relocation-model=pic -disable-fp-elim -color-ss-with-regs -stats -info-output-file - > %t
-; RUN:   grep stackcoloring %t | grep "stack slot refs replaced with reg refs"  | grep 14
+; RUN:   grep stackcoloring %t | grep "stack slot refs replaced with reg refs"  | grep 8
 
 	type { [62 x %struct.Bitvec*] }		; type %0
 	type { i8* }		; type %1
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/stdcall.ll b/libclamav/c++/llvm/test/CodeGen/X86/stdcall.ll
new file mode 100644
index 0000000..70204bc
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/stdcall.ll
@@ -0,0 +1,16 @@
+; RUN: llc < %s | FileCheck %s
+; PR5851
+
+target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-f80:128:128-v64:64:64-v128:128:128-a0:0:64-f80:32:32-n8:16:32"
+target triple = "i386-mingw32"
+
+%0 = type { void (...)* }
+
+ at B = global %0 { void (...)* bitcast (void ()* @MyFunc to void (...)*) }, align 4
+; CHECK: _B:
+; CHECK: .long _MyFunc at 0
+
+define internal x86_stdcallcc void @MyFunc() nounwind {
+entry:
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/Generic/switch-crit-edge-constant.ll b/libclamav/c++/llvm/test/CodeGen/X86/switch-crit-edge-constant.ll
similarity index 100%
rename from libclamav/c++/llvm/test/CodeGen/Generic/switch-crit-edge-constant.ll
rename to libclamav/c++/llvm/test/CodeGen/X86/switch-crit-edge-constant.ll
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/tailcall1.ll b/libclamav/c++/llvm/test/CodeGen/X86/tailcall1.ll
index 42f8cdd..f7ff5d5 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/tailcall1.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/tailcall1.ll
@@ -1,11 +1,14 @@
 ; RUN: llc < %s -march=x86 -tailcallopt | grep TAILCALL | count 5
 
+; With -tailcallopt, CodeGen guarantees a tail call optimization
+; for all of these.
+
 declare fastcc i32 @tailcallee(i32 %a1, i32 %a2, i32 %a3, i32 %a4)
 
 define fastcc i32 @tailcaller(i32 %in1, i32 %in2) nounwind {
 entry:
-	%tmp11 = tail call fastcc i32 @tailcallee(i32 %in1, i32 %in2, i32 %in1, i32 %in2)
-	ret i32 %tmp11
+  %tmp11 = tail call fastcc i32 @tailcallee(i32 %in1, i32 %in2, i32 %in1, i32 %in2)
+  ret i32 %tmp11
 }
 
 declare fastcc i8* @alias_callee()
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/tailcall2.ll b/libclamav/c++/llvm/test/CodeGen/X86/tailcall2.ll
new file mode 100644
index 0000000..80bab61
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/tailcall2.ll
@@ -0,0 +1,197 @@
+; RUN: llc < %s -march=x86    -asm-verbose=false | FileCheck %s -check-prefix=32
+; RUN: llc < %s -march=x86-64 -asm-verbose=false | FileCheck %s -check-prefix=64
+
+define void @t1(i32 %x) nounwind ssp {
+entry:
+; 32: t1:
+; 32: jmp {{_?}}foo
+
+; 64: t1:
+; 64: jmp {{_?}}foo
+  tail call void @foo() nounwind
+  ret void
+}
+
+declare void @foo()
+
+define void @t2() nounwind ssp {
+entry:
+; 32: t2:
+; 32: jmp {{_?}}foo2
+
+; 64: t2:
+; 64: jmp {{_?}}foo2
+  %0 = tail call i32 @foo2() nounwind
+  ret void
+}
+
+declare i32 @foo2()
+
+define void @t3() nounwind ssp {
+entry:
+; 32: t3:
+; 32: jmp {{_?}}foo3
+
+; 64: t3:
+; 64: jmp {{_?}}foo3
+  %0 = tail call i32 @foo3() nounwind
+  ret void
+}
+
+declare i32 @foo3()
+
+define void @t4(void (i32)* nocapture %x) nounwind ssp {
+entry:
+; 32: t4:
+; 32: call *
+; FIXME: gcc can generate a tailcall for this. But it's tricky.
+
+; 64: t4:
+; 64-NOT: call
+; 64: jmpq *
+  tail call void %x(i32 0) nounwind
+  ret void
+}
+
+define void @t5(void ()* nocapture %x) nounwind ssp {
+entry:
+; 32: t5:
+; 32-NOT: call
+; 32: jmpl *
+
+; 64: t5:
+; 64-NOT: call
+; 64: jmpq *
+  tail call void %x() nounwind
+  ret void
+}
+
+define i32 @t6(i32 %x) nounwind ssp {
+entry:
+; 32: t6:
+; 32: call {{_?}}t6
+; 32: jmp {{_?}}bar
+
+; 64: t6:
+; 64: jmp {{_?}}t6
+; 64: jmp {{_?}}bar
+  %0 = icmp slt i32 %x, 10
+  br i1 %0, label %bb, label %bb1
+
+bb:
+  %1 = add nsw i32 %x, -1
+  %2 = tail call i32 @t6(i32 %1) nounwind ssp
+  ret i32 %2
+
+bb1:
+  %3 = tail call i32 @bar(i32 %x) nounwind
+  ret i32 %3
+}
+
+declare i32 @bar(i32)
+
+define i32 @t7(i32 %a, i32 %b, i32 %c) nounwind ssp {
+entry:
+; 32: t7:
+; 32: jmp {{_?}}bar2
+
+; 64: t7:
+; 64: jmp {{_?}}bar2
+  %0 = tail call i32 @bar2(i32 %a, i32 %b, i32 %c) nounwind
+  ret i32 %0
+}
+
+declare i32 @bar2(i32, i32, i32)
+
+define signext i16 @t8() nounwind ssp {
+entry:
+; 32: t8:
+; 32: call {{_?}}bar3
+
+; 64: t8:
+; 64: callq {{_?}}bar3
+  %0 = tail call signext i16 @bar3() nounwind      ; <i16> [#uses=1]
+  ret i16 %0
+}
+
+declare signext i16 @bar3()
+
+define signext i16 @t9(i32 (i32)* nocapture %x) nounwind ssp {
+entry:
+; 32: t9:
+; 32: call *
+
+; 64: t9:
+; 64: callq *
+  %0 = bitcast i32 (i32)* %x to i16 (i32)*
+  %1 = tail call signext i16 %0(i32 0) nounwind
+  ret i16 %1
+}
+
+define void @t10() nounwind ssp {
+entry:
+; 32: t10:
+; 32: call
+
+; 64: t10:
+; 64: callq
+  %0 = tail call i32 @foo4() noreturn nounwind
+  unreachable
+}
+
+declare i32 @foo4()
+
+define i32 @t11(i32 %x, i32 %y, i32 %z.0, i32 %z.1, i32 %z.2) nounwind ssp {
+; In 32-bit mode, it's emitting a bunch of dead loads that are not being
+; eliminated currently.
+
+; 32: t11:
+; 32-NOT: subl ${{[0-9]+}}, %esp
+; 32: jne
+; 32-NOT: movl
+; 32-NOT: addl ${{[0-9]+}}, %esp
+; 32: jmp {{_?}}foo5
+
+; 64: t11:
+; 64-NOT: subq ${{[0-9]+}}, %esp
+; 64-NOT: addq ${{[0-9]+}}, %esp
+; 64: jmp {{_?}}foo5
+entry:
+  %0 = icmp eq i32 %x, 0
+  br i1 %0, label %bb6, label %bb
+
+bb:
+  %1 = tail call i32 @foo5(i32 %x, i32 %y, i32 %z.0, i32 %z.1, i32 %z.2) nounwind
+  ret i32 %1
+
+bb6:
+  ret i32 0
+}
+
+declare i32 @foo5(i32, i32, i32, i32, i32)
+
+%struct.t = type { i32, i32, i32, i32, i32 }
+
+define i32 @t12(i32 %x, i32 %y, %struct.t* byval align 4 %z) nounwind ssp {
+; 32: t12:
+; 32-NOT: subl ${{[0-9]+}}, %esp
+; 32-NOT: addl ${{[0-9]+}}, %esp
+; 32: jmp {{_?}}foo6
+
+; 64: t12:
+; 64-NOT: subq ${{[0-9]+}}, %esp
+; 64-NOT: addq ${{[0-9]+}}, %esp
+; 64: jmp {{_?}}foo6
+entry:
+  %0 = icmp eq i32 %x, 0
+  br i1 %0, label %bb2, label %bb
+
+bb:
+  %1 = tail call i32 @foo6(i32 %x, i32 %y, %struct.t* byval align 4 %z) nounwind
+  ret i32 %1
+
+bb2:
+  ret i32 0
+}
+
+declare i32 @foo6(i32, i32, %struct.t* byval align 4)
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/tailcallfp2.ll b/libclamav/c++/llvm/test/CodeGen/X86/tailcallfp2.ll
index be4f96c..3841f51 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/tailcallfp2.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/tailcallfp2.ll
@@ -2,7 +2,7 @@
 
 declare i32 @putchar(i32)
 
-define fastcc i32 @checktail(i32 %x, i32* %f, i32 %g) {
+define fastcc i32 @checktail(i32 %x, i32* %f, i32 %g) nounwind {
         %tmp1 = icmp sgt i32 %x, 0
         br i1 %tmp1, label %if-then, label %if-else
 
@@ -18,8 +18,8 @@ if-else:
 }
 
 
-define i32 @main() { 
+define i32 @main() nounwind { 
  %f   = bitcast i32 (i32, i32*, i32)* @checktail to i32*
  %res = tail call fastcc i32 @checktail( i32 10, i32* %f,i32 10)
  ret i32 %res
-}
\ No newline at end of file
+}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/twoaddr-coalesce.ll b/libclamav/c++/llvm/test/CodeGen/X86/twoaddr-coalesce.ll
index d0e13f6..4c37225 100644
--- a/libclamav/c++/llvm/test/CodeGen/X86/twoaddr-coalesce.ll
+++ b/libclamav/c++/llvm/test/CodeGen/X86/twoaddr-coalesce.ll
@@ -1,4 +1,4 @@
-; RUN: llc < %s -march=x86 | grep mov | count 5
+; RUN: llc < %s -march=x86 | grep mov | count 4
 ; rdar://6523745
 
 @"\01LC" = internal constant [4 x i8] c"%d\0A\00"		; <[4 x i8]*> [#uses=1]
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/twoaddr-delete.ll b/libclamav/c++/llvm/test/CodeGen/X86/twoaddr-delete.ll
deleted file mode 100644
index 77e3c75..0000000
--- a/libclamav/c++/llvm/test/CodeGen/X86/twoaddr-delete.ll
+++ /dev/null
@@ -1,43 +0,0 @@
-; RUN: llc < %s -march=x86 -stats |& grep {twoaddrinstr} | grep {Number of dead instructions deleted}
-
-	%struct.anon = type { [3 x double], double, %struct.node*, [64 x %struct.bnode*], [64 x %struct.bnode*] }
-	%struct.bnode = type { i16, double, [3 x double], i32, i32, [3 x double], [3 x double], [3 x double], double, %struct.bnode*, %struct.bnode* }
-	%struct.node = type { i16, double, [3 x double], i32, i32 }
-
-define i32 @main(i32 %argc, i8** nocapture %argv) nounwind {
-entry:
-	%0 = malloc %struct.anon		; <%struct.anon*> [#uses=2]
-	%1 = getelementptr %struct.anon* %0, i32 0, i32 2		; <%struct.node**> [#uses=1]
-	br label %bb14.i
-
-bb14.i:		; preds = %bb14.i, %entry
-	%i8.0.reg2mem.0.i = phi i32 [ 0, %entry ], [ %2, %bb14.i ]		; <i32> [#uses=1]
-	%2 = add i32 %i8.0.reg2mem.0.i, 1		; <i32> [#uses=2]
-	%exitcond74.i = icmp eq i32 %2, 32		; <i1> [#uses=1]
-	br i1 %exitcond74.i, label %bb32.i, label %bb14.i
-
-bb32.i:		; preds = %bb32.i, %bb14.i
-	%tmp.0.reg2mem.0.i = phi i32 [ %indvar.next63.i, %bb32.i ], [ 0, %bb14.i ]		; <i32> [#uses=1]
-	%indvar.next63.i = add i32 %tmp.0.reg2mem.0.i, 1		; <i32> [#uses=2]
-	%exitcond64.i = icmp eq i32 %indvar.next63.i, 64		; <i1> [#uses=1]
-	br i1 %exitcond64.i, label %bb47.loopexit.i, label %bb32.i
-
-bb.i.i:		; preds = %bb47.loopexit.i
-	unreachable
-
-stepsystem.exit.i:		; preds = %bb47.loopexit.i
-	store %struct.node* null, %struct.node** %1, align 4
-	br label %bb.i6.i
-
-bb.i6.i:		; preds = %bb.i6.i, %stepsystem.exit.i
-	br i1 false, label %bb107.i.i, label %bb.i6.i
-
-bb107.i.i:		; preds = %bb107.i.i, %bb.i6.i
-	%q_addr.0.i.i.in = phi %struct.bnode** [ null, %bb107.i.i ], [ %3, %bb.i6.i ]		; <%struct.bnode**> [#uses=0]
-	br label %bb107.i.i
-
-bb47.loopexit.i:		; preds = %bb32.i
-	%3 = getelementptr %struct.anon* %0, i32 0, i32 4, i32 0		; <%struct.bnode**> [#uses=1]
-	%4 = icmp eq %struct.node* null, null		; <i1> [#uses=1]
-	br i1 %4, label %stepsystem.exit.i, label %bb.i.i
-}
diff --git a/libclamav/c++/llvm/test/CodeGen/X86/zext-trunc.ll b/libclamav/c++/llvm/test/CodeGen/X86/zext-trunc.ll
new file mode 100644
index 0000000..b9ffbe8
--- /dev/null
+++ b/libclamav/c++/llvm/test/CodeGen/X86/zext-trunc.ll
@@ -0,0 +1,13 @@
+; RUN: llc < %s -march=x86-64 | FileCheck %s
+; rdar://7570931
+
+define i64 @foo(i64 %a, i64 %b) nounwind {
+; CHECK: foo:
+; CHECK: leal
+; CHECK-NOT: movl
+; CHECK: ret
+  %c = add i64 %a, %b
+  %d = trunc i64 %c to i32
+  %e = zext i32 %d to i64
+  ret i64 %e
+}
diff --git a/libclamav/c++/llvm/test/Feature/alignment.ll b/libclamav/c++/llvm/test/Feature/alignment.ll
index 409efeb..ef35a13 100644
--- a/libclamav/c++/llvm/test/Feature/alignment.ll
+++ b/libclamav/c++/llvm/test/Feature/alignment.ll
@@ -19,3 +19,7 @@ define i32* @test2() {
         ret i32* %X
 }
 
+define void @test3() alignstack(16) {
+        ret void
+}
+
diff --git a/libclamav/c++/llvm/test/Feature/unions.ll b/libclamav/c++/llvm/test/Feature/unions.ll
new file mode 100644
index 0000000..9d6c36b
--- /dev/null
+++ b/libclamav/c++/llvm/test/Feature/unions.ll
@@ -0,0 +1,12 @@
+; RUN: llvm-as < %s | llvm-dis > %t1.ll
+; RUN: llvm-as %t1.ll -o - | llvm-dis > %t2.ll
+; RUN: diff %t1.ll %t2.ll
+
+%union.anon = type union { i8, i32, float }
+
+ at union1 = constant union { i32, i8 } { i32 4 }
+ at union2 = constant union { i32, i8 } insertvalue(union { i32, i8 } undef, i32 4, 0)
+
+define void @"Unions" () {
+  ret void
+}
diff --git a/libclamav/c++/llvm/test/Other/2007-06-05-PassID.ll b/libclamav/c++/llvm/test/Other/2007-06-05-PassID.ll
index 7a03544..2554b8b 100644
--- a/libclamav/c++/llvm/test/Other/2007-06-05-PassID.ll
+++ b/libclamav/c++/llvm/test/Other/2007-06-05-PassID.ll
@@ -1,4 +1,4 @@
-;RUN: opt < %s -analyze -dot-cfg-only -disable-output 2>/dev/null
+;RUN: opt < %s -analyze -dot-cfg-only 2>/dev/null
 ;PR 1497
 
 define void @foo() {
diff --git a/libclamav/c++/llvm/test/Other/2007-06-28-PassManager.ll b/libclamav/c++/llvm/test/Other/2007-06-28-PassManager.ll
index f162a40..0ed2759 100644
--- a/libclamav/c++/llvm/test/Other/2007-06-28-PassManager.ll
+++ b/libclamav/c++/llvm/test/Other/2007-06-28-PassManager.ll
@@ -1,6 +1,6 @@
-; RUN: opt < %s -analyze -inline -disable-output
+; RUN: opt < %s -analyze -inline
 ; PR1526
-; RUN: opt < %s -analyze -indvars -disable-output
+; RUN: opt < %s -analyze -indvars
 ; PR1539
 define i32 @test1() {
        ret i32 0
diff --git a/libclamav/c++/llvm/test/Other/constant-fold-gep.ll b/libclamav/c++/llvm/test/Other/constant-fold-gep.ll
new file mode 100644
index 0000000..2888b3d
--- /dev/null
+++ b/libclamav/c++/llvm/test/Other/constant-fold-gep.ll
@@ -0,0 +1,428 @@
+; "PLAIN" - No optimizations. This tests the target-independent
+; constant folder.
+; RUN: opt -S -o - < %s | FileCheck --check-prefix=PLAIN %s
+
+; "OPT" - Optimizations but no targetdata. This tests target-independent
+; folding in the optimizers.
+; RUN: opt -S -o - -instcombine -globalopt < %s | FileCheck --check-prefix=OPT %s
+
+; "TO" - Optimizations and targetdata. This tests target-dependent
+; folding in the optimizers.
+; RUN: opt -S -o - -instcombine -globalopt -default-data-layout="e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64" < %s | FileCheck --check-prefix=TO %s
+
+; "SCEV" - ScalarEvolution but no targetdata.
+; RUN: opt -analyze -scalar-evolution < %s | FileCheck --check-prefix=SCEV %s
+
+; ScalarEvolution with targetdata isn't interesting on these testcases
+; because ScalarEvolution doesn't attempt to duplicate all of instcombine's
+; and the constant folders' folding.
+
+; PLAIN: %0 = type { i1, double }
+; PLAIN: %1 = type { double, float, double, double }
+; PLAIN: %2 = type { i1, i1* }
+; PLAIN: %3 = type { i64, i64 }
+; OPT: %0 = type { i1, double }
+; OPT: %1 = type { double, float, double, double }
+; OPT: %2 = type { i1, i1* }
+; OPT: %3 = type { i64, i64 }
+
+; The automatic constant folder in opt does not have targetdata access, so
+; it can't fold gep arithmetic, in general. However, the constant folder run
+; from instcombine and global opt can use targetdata.
+
+; PLAIN: @G8 = global i8* getelementptr (i8* inttoptr (i32 1 to i8*), i32 -1)
+; PLAIN: @G1 = global i1* getelementptr (i1* inttoptr (i32 1 to i1*), i32 -1)
+; PLAIN: @F8 = global i8* getelementptr (i8* inttoptr (i32 1 to i8*), i32 -2)
+; PLAIN: @F1 = global i1* getelementptr (i1* inttoptr (i32 1 to i1*), i32 -2)
+; PLAIN: @H8 = global i8* getelementptr (i8* null, i32 -1)
+; PLAIN: @H1 = global i1* getelementptr (i1* null, i32 -1)
+; OPT: @G8 = global i8* getelementptr (i8* inttoptr (i32 1 to i8*), i32 -1)
+; OPT: @G1 = global i1* getelementptr (i1* inttoptr (i32 1 to i1*), i32 -1)
+; OPT: @F8 = global i8* getelementptr (i8* inttoptr (i32 1 to i8*), i32 -2)
+; OPT: @F1 = global i1* getelementptr (i1* inttoptr (i32 1 to i1*), i32 -2)
+; OPT: @H8 = global i8* getelementptr (i8* null, i32 -1)
+; OPT: @H1 = global i1* getelementptr (i1* null, i32 -1)
+; TO: @G8 = global i8* null
+; TO: @G1 = global i1* null
+; TO: @F8 = global i8* inttoptr (i64 -1 to i8*)
+; TO: @F1 = global i1* inttoptr (i64 -1 to i1*)
+; TO: @H8 = global i8* inttoptr (i64 -1 to i8*)
+; TO: @H1 = global i1* inttoptr (i64 -1 to i1*)
+
+ at G8 = global i8* getelementptr (i8* inttoptr (i32 1 to i8*), i32 -1)
+ at G1 = global i1* getelementptr (i1* inttoptr (i32 1 to i1*), i32 -1)
+ at F8 = global i8* getelementptr (i8* inttoptr (i32 1 to i8*), i32 -2)
+ at F1 = global i1* getelementptr (i1* inttoptr (i32 1 to i1*), i32 -2)
+ at H8 = global i8* getelementptr (i8* inttoptr (i32 0 to i8*), i32 -1)
+ at H1 = global i1* getelementptr (i1* inttoptr (i32 0 to i1*), i32 -1)
+
+; The target-independent folder should be able to do some clever
+; simplifications on sizeof, alignof, and offsetof expressions. The
+; target-dependent folder should fold these down to constants.
+
+; PLAIN: @a = constant i64 mul (i64 ptrtoint (double* getelementptr (double* null, i32 1) to i64), i64 2310)
+; PLAIN: @b = constant i64 ptrtoint (double* getelementptr (%0* null, i64 0, i32 1) to i64)
+; PLAIN: @c = constant i64 mul nuw (i64 ptrtoint (double* getelementptr (double* null, i32 1) to i64), i64 2)
+; PLAIN: @d = constant i64 mul nuw (i64 ptrtoint (double* getelementptr (double* null, i32 1) to i64), i64 11)
+; PLAIN: @e = constant i64 ptrtoint (double* getelementptr (%1* null, i64 0, i32 2) to i64)
+; PLAIN: @f = constant i64 1
+; PLAIN: @g = constant i64 ptrtoint (double* getelementptr (%0* null, i64 0, i32 1) to i64)
+; PLAIN: @h = constant i64 ptrtoint (i1** getelementptr (i1** null, i32 1) to i64)
+; PLAIN: @i = constant i64 ptrtoint (i1** getelementptr (%2* null, i64 0, i32 1) to i64)
+; OPT: @a = constant i64 mul (i64 ptrtoint (double* getelementptr (double* null, i32 1) to i64), i64 2310)
+; OPT: @b = constant i64 ptrtoint (double* getelementptr (%0* null, i64 0, i32 1) to i64)
+; OPT: @c = constant i64 mul (i64 ptrtoint (double* getelementptr (double* null, i32 1) to i64), i64 2)
+; OPT: @d = constant i64 mul (i64 ptrtoint (double* getelementptr (double* null, i32 1) to i64), i64 11)
+; OPT: @e = constant i64 ptrtoint (double* getelementptr (%1* null, i64 0, i32 2) to i64)
+; OPT: @f = constant i64 1
+; OPT: @g = constant i64 ptrtoint (double* getelementptr (%0* null, i64 0, i32 1) to i64)
+; OPT: @h = constant i64 ptrtoint (i1** getelementptr (i1** null, i32 1) to i64)
+; OPT: @i = constant i64 ptrtoint (i1** getelementptr (%2* null, i64 0, i32 1) to i64)
+; TO: @a = constant i64 18480
+; TO: @b = constant i64 8
+; TO: @c = constant i64 16
+; TO: @d = constant i64 88
+; TO: @e = constant i64 16
+; TO: @f = constant i64 1
+; TO: @g = constant i64 8
+; TO: @h = constant i64 8
+; TO: @i = constant i64 8
+
+ at a = constant i64 mul (i64 3, i64 mul (i64 ptrtoint ({[7 x double], [7 x double]}* getelementptr ({[7 x double], [7 x double]}* null, i64 11) to i64), i64 5))
+ at b = constant i64 ptrtoint ([13 x double]* getelementptr ({i1, [13 x double]}* null, i64 0, i32 1) to i64)
+ at c = constant i64 ptrtoint (double* getelementptr ({double, double, double, double}* null, i64 0, i32 2) to i64)
+ at d = constant i64 ptrtoint (double* getelementptr ([13 x double]* null, i64 0, i32 11) to i64)
+ at e = constant i64 ptrtoint (double* getelementptr ({double, float, double, double}* null, i64 0, i32 2) to i64)
+ at f = constant i64 ptrtoint (<{ i16, i128 }>* getelementptr ({i1, <{ i16, i128 }>}* null, i64 0, i32 1) to i64)
+ at g = constant i64 ptrtoint ({double, double}* getelementptr ({i1, {double, double}}* null, i64 0, i32 1) to i64)
+ at h = constant i64 ptrtoint (double** getelementptr (double** null, i64 1) to i64)
+ at i = constant i64 ptrtoint (double** getelementptr ({i1, double*}* null, i64 0, i32 1) to i64)
+
+; The target-dependent folder should cast GEP indices to integer-sized pointers.
+
+; PLAIN: @M = constant i64* getelementptr (i64* null, i32 1)
+; PLAIN: @N = constant i64* getelementptr (%3* null, i32 0, i32 1)
+; PLAIN: @O = constant i64* getelementptr ([2 x i64]* null, i32 0, i32 1)
+; OPT: @M = constant i64* getelementptr (i64* null, i32 1)
+; OPT: @N = constant i64* getelementptr (%3* null, i32 0, i32 1)
+; OPT: @O = constant i64* getelementptr ([2 x i64]* null, i32 0, i32 1)
+; TO: @M = constant i64* inttoptr (i64 8 to i64*)
+; TO: @N = constant i64* inttoptr (i64 8 to i64*)
+; TO: @O = constant i64* inttoptr (i64 8 to i64*)
+
+ at M = constant i64* getelementptr (i64 *null, i32 1)
+ at N = constant i64* getelementptr ({ i64, i64 } *null, i32 0, i32 1)
+ at O = constant i64* getelementptr ([2 x i64] *null, i32 0, i32 1)
+
+; Duplicate all of the above as function return values rather than
+; global initializers.
+
+; PLAIN: define i8* @goo8() nounwind {
+; PLAIN:   %t = bitcast i8* getelementptr (i8* inttoptr (i32 1 to i8*), i32 -1) to i8*
+; PLAIN:   ret i8* %t
+; PLAIN: }
+; PLAIN: define i1* @goo1() nounwind {
+; PLAIN:   %t = bitcast i1* getelementptr (i1* inttoptr (i32 1 to i1*), i32 -1) to i1*
+; PLAIN:   ret i1* %t
+; PLAIN: }
+; PLAIN: define i8* @foo8() nounwind {
+; PLAIN:   %t = bitcast i8* getelementptr (i8* inttoptr (i32 1 to i8*), i32 -2) to i8*
+; PLAIN:   ret i8* %t
+; PLAIN: }
+; PLAIN: define i1* @foo1() nounwind {
+; PLAIN:   %t = bitcast i1* getelementptr (i1* inttoptr (i32 1 to i1*), i32 -2) to i1*
+; PLAIN:   ret i1* %t
+; PLAIN: }
+; PLAIN: define i8* @hoo8() nounwind {
+; PLAIN:   %t = bitcast i8* getelementptr (i8* null, i32 -1) to i8*
+; PLAIN:   ret i8* %t
+; PLAIN: }
+; PLAIN: define i1* @hoo1() nounwind {
+; PLAIN:   %t = bitcast i1* getelementptr (i1* null, i32 -1) to i1*
+; PLAIN:   ret i1* %t
+; PLAIN: }
+; OPT: define i8* @goo8() nounwind {
+; OPT:   ret i8* getelementptr (i8* inttoptr (i32 1 to i8*), i32 -1)
+; OPT: }
+; OPT: define i1* @goo1() nounwind {
+; OPT:   ret i1* getelementptr (i1* inttoptr (i32 1 to i1*), i32 -1)
+; OPT: }
+; OPT: define i8* @foo8() nounwind {
+; OPT:   ret i8* getelementptr (i8* inttoptr (i32 1 to i8*), i32 -2)
+; OPT: }
+; OPT: define i1* @foo1() nounwind {
+; OPT:   ret i1* getelementptr (i1* inttoptr (i32 1 to i1*), i32 -2)
+; OPT: }
+; OPT: define i8* @hoo8() nounwind {
+; OPT:   ret i8* getelementptr (i8* null, i32 -1)
+; OPT: }
+; OPT: define i1* @hoo1() nounwind {
+; OPT:   ret i1* getelementptr (i1* null, i32 -1)
+; OPT: }
+; TO: define i8* @goo8() nounwind {
+; TO:   ret i8* null
+; TO: }
+; TO: define i1* @goo1() nounwind {
+; TO:   ret i1* null
+; TO: }
+; TO: define i8* @foo8() nounwind {
+; TO:   ret i8* inttoptr (i64 -1 to i8*)
+; TO: }
+; TO: define i1* @foo1() nounwind {
+; TO:   ret i1* inttoptr (i64 -1 to i1*)
+; TO: }
+; TO: define i8* @hoo8() nounwind {
+; TO:   ret i8* inttoptr (i64 -1 to i8*)
+; TO: }
+; TO: define i1* @hoo1() nounwind {
+; TO:   ret i1* inttoptr (i64 -1 to i1*)
+; TO: }
+; SCEV: Classifying expressions for: @goo8
+; SCEV:   %t = bitcast i8* getelementptr (i8* inttoptr (i32 1 to i8*), i32 -1) to i8*
+; SCEV:   -->  ((-1 * sizeof(i8)) + inttoptr (i32 1 to i8*))
+; SCEV: Classifying expressions for: @goo1
+; SCEV:   %t = bitcast i1* getelementptr (i1* inttoptr (i32 1 to i1*), i32 -1) to i1*
+; SCEV:   -->  ((-1 * sizeof(i1)) + inttoptr (i32 1 to i1*))
+; SCEV: Classifying expressions for: @foo8
+; SCEV:   %t = bitcast i8* getelementptr (i8* inttoptr (i32 1 to i8*), i32 -2) to i8*
+; SCEV:   -->  ((-2 * sizeof(i8)) + inttoptr (i32 1 to i8*))
+; SCEV: Classifying expressions for: @foo1
+; SCEV:   %t = bitcast i1* getelementptr (i1* inttoptr (i32 1 to i1*), i32 -2) to i1*
+; SCEV:   -->  ((-2 * sizeof(i1)) + inttoptr (i32 1 to i1*))
+; SCEV: Classifying expressions for: @hoo8
+; SCEV:   -->  (-1 * sizeof(i8))
+; SCEV: Classifying expressions for: @hoo1
+; SCEV:   -->  (-1 * sizeof(i1))
+
+define i8* @goo8() nounwind {
+  %t = bitcast i8* getelementptr (i8* inttoptr (i32 1 to i8*), i32 -1) to i8*
+  ret i8* %t
+}
+define i1* @goo1() nounwind {
+  %t = bitcast i1* getelementptr (i1* inttoptr (i32 1 to i1*), i32 -1) to i1*
+  ret i1* %t
+}
+define i8* @foo8() nounwind {
+  %t = bitcast i8* getelementptr (i8* inttoptr (i32 1 to i8*), i32 -2) to i8*
+  ret i8* %t
+}
+define i1* @foo1() nounwind {
+  %t = bitcast i1* getelementptr (i1* inttoptr (i32 1 to i1*), i32 -2) to i1*
+  ret i1* %t
+}
+define i8* @hoo8() nounwind {
+  %t = bitcast i8* getelementptr (i8* inttoptr (i32 0 to i8*), i32 -1) to i8*
+  ret i8* %t
+}
+define i1* @hoo1() nounwind {
+  %t = bitcast i1* getelementptr (i1* inttoptr (i32 0 to i1*), i32 -1) to i1*
+  ret i1* %t
+}
+
+; PLAIN: define i64 @fa() nounwind {
+; PLAIN:   %t = bitcast i64 mul (i64 ptrtoint (double* getelementptr (double* null, i32 1) to i64), i64 2310) to i64
+; PLAIN:   ret i64 %t
+; PLAIN: }
+; PLAIN: define i64 @fb() nounwind {
+; PLAIN:   %t = bitcast i64 ptrtoint (double* getelementptr (%0* null, i64 0, i32 1) to i64) to i64
+; PLAIN:   ret i64 %t
+; PLAIN: }
+; PLAIN: define i64 @fc() nounwind {
+; PLAIN:   %t = bitcast i64 mul nuw (i64 ptrtoint (double* getelementptr (double* null, i32 1) to i64), i64 2) to i64
+; PLAIN:   ret i64 %t
+; PLAIN: }
+; PLAIN: define i64 @fd() nounwind {
+; PLAIN:   %t = bitcast i64 mul nuw (i64 ptrtoint (double* getelementptr (double* null, i32 1) to i64), i64 11) to i64
+; PLAIN:   ret i64 %t
+; PLAIN: }
+; PLAIN: define i64 @fe() nounwind {
+; PLAIN:   %t = bitcast i64 ptrtoint (double* getelementptr (%1* null, i64 0, i32 2) to i64) to i64
+; PLAIN:   ret i64 %t
+; PLAIN: }
+; PLAIN: define i64 @ff() nounwind {
+; PLAIN:   %t = bitcast i64 1 to i64
+; PLAIN:   ret i64 %t
+; PLAIN: }
+; PLAIN: define i64 @fg() nounwind {
+; PLAIN:   %t = bitcast i64 ptrtoint (double* getelementptr (%0* null, i64 0, i32 1) to i64)
+; PLAIN:   ret i64 %t
+; PLAIN: }
+; PLAIN: define i64 @fh() nounwind {
+; PLAIN:   %t = bitcast i64 ptrtoint (i1** getelementptr (i1** null, i32 1) to i64)
+; PLAIN:   ret i64 %t
+; PLAIN: }
+; PLAIN: define i64 @fi() nounwind {
+; PLAIN:   %t = bitcast i64 ptrtoint (i1** getelementptr (%2* null, i64 0, i32 1) to i64)
+; PLAIN:   ret i64 %t
+; PLAIN: }
+; OPT: define i64 @fa() nounwind {
+; OPT:   ret i64 mul (i64 ptrtoint (double* getelementptr (double* null, i32 1) to i64), i64 2310)
+; OPT: }
+; OPT: define i64 @fb() nounwind {
+; OPT:   ret i64 ptrtoint (double* getelementptr (%0* null, i64 0, i32 1) to i64)
+; OPT: }
+; OPT: define i64 @fc() nounwind {
+; OPT:   ret i64 mul nuw (i64 ptrtoint (double* getelementptr (double* null, i32 1) to i64), i64 2)
+; OPT: }
+; OPT: define i64 @fd() nounwind {
+; OPT:   ret i64 mul nuw (i64 ptrtoint (double* getelementptr (double* null, i32 1) to i64), i64 11)
+; OPT: }
+; OPT: define i64 @fe() nounwind {
+; OPT:   ret i64 ptrtoint (double* getelementptr (%1* null, i64 0, i32 2) to i64)
+; OPT: }
+; OPT: define i64 @ff() nounwind {
+; OPT:   ret i64 1
+; OPT: }
+; OPT: define i64 @fg() nounwind {
+; OPT:   ret i64 ptrtoint (double* getelementptr (%0* null, i64 0, i32 1) to i64)
+; OPT: }
+; OPT: define i64 @fh() nounwind {
+; OPT:   ret i64 ptrtoint (i1** getelementptr (i1** null, i32 1) to i64)
+; OPT: }
+; OPT: define i64 @fi() nounwind {
+; OPT:   ret i64 ptrtoint (i1** getelementptr (%2* null, i64 0, i32 1) to i64)
+; OPT: }
+; TO: define i64 @fa() nounwind {
+; TO:   ret i64 18480
+; TO: }
+; TO: define i64 @fb() nounwind {
+; TO:   ret i64 8
+; TO: }
+; TO: define i64 @fc() nounwind {
+; TO:   ret i64 16
+; TO: }
+; TO: define i64 @fd() nounwind {
+; TO:   ret i64 88
+; TO: }
+; TO: define i64 @fe() nounwind {
+; TO:   ret i64 16
+; TO: }
+; TO: define i64 @ff() nounwind {
+; TO:   ret i64 1
+; TO: }
+; TO: define i64 @fg() nounwind {
+; TO:   ret i64 8
+; TO: }
+; TO: define i64 @fh() nounwind {
+; TO:   ret i64 8
+; TO: }
+; TO: define i64 @fi() nounwind {
+; TO:   ret i64 8
+; TO: }
+; SCEV: Classifying expressions for: @fa
+; SCEV:   %t = bitcast i64 mul (i64 ptrtoint (double* getelementptr (double* null, i32 1) to i64), i64 2310) to i64 
+; SCEV:   -->  (2310 * sizeof(double))
+; SCEV: Classifying expressions for: @fb
+; SCEV:   %t = bitcast i64 ptrtoint (double* getelementptr (%0* null, i64 0, i32 1) to i64) to i64 
+; SCEV:   -->  alignof(double)
+; SCEV: Classifying expressions for: @fc
+; SCEV:   %t = bitcast i64 mul nuw (i64 ptrtoint (double* getelementptr (double* null, i32 1) to i64), i64 2) to i64 
+; SCEV:   -->  (2 * sizeof(double))
+; SCEV: Classifying expressions for: @fd
+; SCEV:   %t = bitcast i64 mul nuw (i64 ptrtoint (double* getelementptr (double* null, i32 1) to i64), i64 11) to i64 
+; SCEV:   -->  (11 * sizeof(double))
+; SCEV: Classifying expressions for: @fe
+; SCEV:   %t = bitcast i64 ptrtoint (double* getelementptr (%1* null, i64 0, i32 2) to i64) to i64 
+; SCEV:   -->  offsetof({ double, float, double, double }, 2)
+; SCEV: Classifying expressions for: @ff
+; SCEV:   %t = bitcast i64 1 to i64 
+; SCEV:   -->  1
+; SCEV: Classifying expressions for: @fg
+; SCEV:   %t = bitcast i64 ptrtoint (double* getelementptr (%0* null, i64 0, i32 1) to i64)
+; SCEV:   -->  alignof(double)
+; SCEV: Classifying expressions for: @fh
+; SCEV:   %t = bitcast i64 ptrtoint (i1** getelementptr (i1** null, i32 1) to i64)
+; SCEV:   -->  sizeof(i1*)
+; SCEV: Classifying expressions for: @fi
+; SCEV:   %t = bitcast i64 ptrtoint (i1** getelementptr (%2* null, i64 0, i32 1) to i64)
+; SCEV:   -->  alignof(i1*)
+
+define i64 @fa() nounwind {
+  %t = bitcast i64 mul (i64 3, i64 mul (i64 ptrtoint ({[7 x double], [7 x double]}* getelementptr ({[7 x double], [7 x double]}* null, i64 11) to i64), i64 5)) to i64
+  ret i64 %t
+}
+define i64 @fb() nounwind {
+  %t = bitcast i64 ptrtoint ([13 x double]* getelementptr ({i1, [13 x double]}* null, i64 0, i32 1) to i64) to i64
+  ret i64 %t
+}
+define i64 @fc() nounwind {
+  %t = bitcast i64 ptrtoint (double* getelementptr ({double, double, double, double}* null, i64 0, i32 2) to i64) to i64
+  ret i64 %t
+}
+define i64 @fd() nounwind {
+  %t = bitcast i64 ptrtoint (double* getelementptr ([13 x double]* null, i64 0, i32 11) to i64) to i64
+  ret i64 %t
+}
+define i64 @fe() nounwind {
+  %t = bitcast i64 ptrtoint (double* getelementptr ({double, float, double, double}* null, i64 0, i32 2) to i64) to i64
+  ret i64 %t
+}
+define i64 @ff() nounwind {
+  %t = bitcast i64 ptrtoint (<{ i16, i128 }>* getelementptr ({i1, <{ i16, i128 }>}* null, i64 0, i32 1) to i64) to i64
+  ret i64 %t
+}
+define i64 @fg() nounwind {
+  %t = bitcast i64 ptrtoint ({double, double}* getelementptr ({i1, {double, double}}* null, i64 0, i32 1) to i64) to i64
+  ret i64 %t
+}
+define i64 @fh() nounwind {
+  %t = bitcast i64 ptrtoint (double** getelementptr (double** null, i32 1) to i64) to i64
+  ret i64 %t
+}
+define i64 @fi() nounwind {
+  %t = bitcast i64 ptrtoint (double** getelementptr ({i1, double*}* null, i64 0, i32 1) to i64) to i64
+  ret i64 %t
+}
+
+; PLAIN: define i64* @fM() nounwind {
+; PLAIN:   %t = bitcast i64* getelementptr (i64* null, i32 1) to i64*
+; PLAIN:   ret i64* %t
+; PLAIN: }
+; PLAIN: define i64* @fN() nounwind {
+; PLAIN:   %t = bitcast i64* getelementptr (%3* null, i32 0, i32 1) to i64*
+; PLAIN:   ret i64* %t
+; PLAIN: }
+; PLAIN: define i64* @fO() nounwind {
+; PLAIN:   %t = bitcast i64* getelementptr ([2 x i64]* null, i32 0, i32 1) to i64*
+; PLAIN:   ret i64* %t
+; PLAIN: }
+; OPT: define i64* @fM() nounwind {
+; OPT:   ret i64* getelementptr (i64* null, i32 1)
+; OPT: }
+; OPT: define i64* @fN() nounwind {
+; OPT:   ret i64* getelementptr (%3* null, i32 0, i32 1)
+; OPT: }
+; OPT: define i64* @fO() nounwind {
+; OPT:   ret i64* getelementptr ([2 x i64]* null, i32 0, i32 1)
+; OPT: }
+; TO: define i64* @fM() nounwind {
+; TO:   ret i64* inttoptr (i64 8 to i64*)
+; TO: }
+; TO: define i64* @fN() nounwind {
+; TO:   ret i64* inttoptr (i64 8 to i64*)
+; TO: }
+; TO: define i64* @fO() nounwind {
+; TO:   ret i64* inttoptr (i64 8 to i64*)
+; TO: }
+; SCEV: Classifying expressions for: @fM
+; SCEV:   %t = bitcast i64* getelementptr (i64* null, i32 1) to i64* 
+; SCEV:   -->  sizeof(i64)
+; SCEV: Classifying expressions for: @fN
+; SCEV:   %t = bitcast i64* getelementptr (%3* null, i32 0, i32 1) to i64* 
+; SCEV:   -->  sizeof(i64)
+; SCEV: Classifying expressions for: @fO
+; SCEV:   %t = bitcast i64* getelementptr ([2 x i64]* null, i32 0, i32 1) to i64* 
+; SCEV:   -->  sizeof(i64)
+
+define i64* @fM() nounwind {
+  %t = bitcast i64* getelementptr (i64 *null, i32 1) to i64*
+  ret i64* %t
+}
+define i64* @fN() nounwind {
+  %t = bitcast i64* getelementptr ({ i64, i64 } *null, i32 0, i32 1) to i64*
+  ret i64* %t
+}
+define i64* @fO() nounwind {
+  %t = bitcast i64* getelementptr ([2 x i64] *null, i32 0, i32 1) to i64*
+  ret i64* %t
+}
diff --git a/libclamav/c++/llvm/test/lib/llvm.exp b/libclamav/c++/llvm/test/lib/llvm.exp
index 2c1bef9..319cc11 100644
--- a/libclamav/c++/llvm/test/lib/llvm.exp
+++ b/libclamav/c++/llvm/test/lib/llvm.exp
@@ -301,6 +301,16 @@ proc llvm_supports_target { tgtName } {
   return 0
 }
 
+proc llvm_supports_darwin_and_target { tgtName } {
+  global target_triplet
+  if { [ llvm_supports_target $tgtName ] } {
+    if { [regexp darwin $target_triplet match] } {
+      return 1
+    }
+  }
+  return 0
+}
+
 # This procedure provides an interface to check the BINDINGS_TO_BUILD makefile
 # variable to see if a particular binding has been configured to build.
 proc llvm_supports_binding { name } {
diff --git a/libclamav/c++/llvm/test/lit.cfg b/libclamav/c++/llvm/test/lit.cfg
index 8e85168..0894d9b 100644
--- a/libclamav/c++/llvm/test/lit.cfg
+++ b/libclamav/c++/llvm/test/lit.cfg
@@ -114,6 +114,11 @@ for sub in ['llvmgcc', 'llvmgxx', 'compile_cxx', 'compile_c',
     if sub in ('llvmgcc', 'llvmgxx'):
         config.substitutions.append(('%' + sub,
                                      site_exp[sub] + ' -emit-llvm -w'))
+    # FIXME: This is a hack to avoid LLVMC tests failing due to a clang driver
+    #        warning when passing in "-fexceptions -fno-exceptions".
+    elif sub == 'compile_cxx':
+        config.substitutions.append(('%' + sub,
+                                  site_exp[sub].replace('-fno-exceptions', '')))
     else:
         config.substitutions.append(('%' + sub, site_exp[sub]))
 
@@ -127,6 +132,9 @@ targets = set(site_exp["TARGETS_TO_BUILD"].split())
 def llvm_supports_target(name):
     return name in targets
 
+def llvm_supports_darwin_and_target(name):
+    return 'darwin' in config.target_triple and llvm_supports_target(name)
+
 langs = set(site_exp['llvmgcc_langs'].split(','))
 def llvm_gcc_supports(name):
     return name in langs
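The helper added to lit.cfg above gates a test on two independent conditions: the named backend must be configured to build, and the host triple must be Darwin. A self-contained sketch of that predicate logic (the sample `targets` set and triple are illustrative values, not taken from the patch):

```python
# Standalone sketch of the predicates added to lit.cfg above.
# Sample configuration values; a real lit run reads these from site.exp.
targets = {"X86", "ARM", "PowerPC"}
target_triple = "x86_64-apple-darwin10"

def llvm_supports_target(name):
    return name in targets

def llvm_supports_darwin_and_target(name):
    # Both conditions must hold: the backend is built AND the host is Darwin.
    return 'darwin' in target_triple and llvm_supports_target(name)

print(llvm_supports_darwin_and_target("X86"))     # built and on darwin
print(llvm_supports_darwin_and_target("MSP430"))  # not configured to build
```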
diff --git a/libclamav/c++/llvm/tools/Makefile b/libclamav/c++/llvm/tools/Makefile
index 0340c7f..c9b9ff2 100644
--- a/libclamav/c++/llvm/tools/Makefile
+++ b/libclamav/c++/llvm/tools/Makefile
@@ -21,7 +21,8 @@ PARALLEL_DIRS := opt llvm-as llvm-dis \
                  llvm-ld llvm-prof llvm-link \
                  lli llvm-extract \
                  bugpoint llvm-bcanalyzer llvm-stub \
-                 llvm-mc llvmc
+                 llvm-mc llvmc \
+                 edis
 
 # Let users override the set of tools to build from the command line.
 ifdef ONLY_TOOLS
diff --git a/libclamav/c++/llvm/tools/edis/EDDisassembler.cpp b/libclamav/c++/llvm/tools/edis/EDDisassembler.cpp
new file mode 100644
index 0000000..99864fb
--- /dev/null
+++ b/libclamav/c++/llvm/tools/edis/EDDisassembler.cpp
@@ -0,0 +1,386 @@
+//===-EDDisassembler.cpp - LLVM Enhanced Disassembler ---------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+// 
+//===----------------------------------------------------------------------===//
+//
+// This file implements the Enhanced Disassembly library's disassembler class.
+// The disassembler is responsible for vending individual instructions according
+// to a given architecture and disassembly syntax.
+//
+//===----------------------------------------------------------------------===//
+
+#include "llvm/ADT/OwningPtr.h"
+#include "llvm/ADT/SmallVector.h"
+#include "llvm/MC/MCAsmInfo.h"
+#include "llvm/MC/MCContext.h"
+#include "llvm/MC/MCDisassembler.h"
+#include "llvm/MC/MCExpr.h"
+#include "llvm/MC/MCInst.h"
+#include "llvm/MC/MCInstPrinter.h"
+#include "llvm/MC/MCStreamer.h"
+#include "llvm/MC/MCParser/AsmLexer.h"
+#include "llvm/MC/MCParser/AsmParser.h"
+#include "llvm/MC/MCParser/MCAsmParser.h"
+#include "llvm/MC/MCParser/MCParsedAsmOperand.h"
+#include "llvm/Support/MemoryBuffer.h"
+#include "llvm/Support/MemoryObject.h"
+#include "llvm/Support/SourceMgr.h"
+#include "llvm/Target/TargetAsmLexer.h"
+#include "llvm/Target/TargetAsmParser.h"
+#include "llvm/Target/TargetRegistry.h"
+#include "llvm/Target/TargetMachine.h"
+#include "llvm/Target/TargetRegisterInfo.h"
+#include "llvm/Target/TargetSelect.h"
+
+#include "EDDisassembler.h"
+#include "EDInst.h"
+
+#include "../../lib/Target/X86/X86GenEDInfo.inc"
+
+using namespace llvm;
+
+bool EDDisassembler::sInitialized = false;
+EDDisassembler::DisassemblerMap_t EDDisassembler::sDisassemblers;
+
+struct InfoMap {
+  Triple::ArchType Arch;
+  const char *String;
+  const InstInfo *Info;
+};
+
+static struct InfoMap infomap[] = {
+  { Triple::x86,          "i386-unknown-unknown",   instInfoX86 },
+  { Triple::x86_64,       "x86_64-unknown-unknown", instInfoX86 },
+  { Triple::InvalidArch,  NULL,                     NULL        }
+};
+
+/// infoFromArch - Returns the InfoMap corresponding to a given architecture,
+///   or NULL if there is an error
+///
+/// @arg arch - The Triple::ArchType for the desired architecture
+static const InfoMap *infoFromArch(Triple::ArchType arch) {
+  unsigned int infoIndex;
+  
+  for (infoIndex = 0; infomap[infoIndex].String != NULL; ++infoIndex) {
+    if(arch == infomap[infoIndex].Arch)
+      return &infomap[infoIndex];
+  }
+  
+  return NULL;
+}
+
+/// getLLVMSyntaxVariant - gets the constant to use to get an assembly printer
+///   for the desired assembly syntax, suitable for passing to 
+///   Target::createMCInstPrinter()
+///
+/// @arg arch   - The target architecture
+/// @arg syntax - The assembly syntax in sd form
+static int getLLVMSyntaxVariant(Triple::ArchType arch,
+                                EDAssemblySyntax_t syntax) {
+  switch (syntax) {
+  default:
+    return -1;
+  // Mappings below from X86AsmPrinter.cpp
+  case kEDAssemblySyntaxX86ATT:
+    if (arch == Triple::x86 || arch == Triple::x86_64)
+      return 0;
+    else
+      return -1;
+  case kEDAssemblySyntaxX86Intel:
+    if (arch == Triple::x86 || arch == Triple::x86_64)
+      return 1;
+    else
+      return -1;
+  }
+}
+
+#define BRINGUP_TARGET(tgt)           \
+  LLVMInitialize##tgt##TargetInfo();  \
+  LLVMInitialize##tgt##Target();      \
+  LLVMInitialize##tgt##AsmPrinter();  \
+  LLVMInitialize##tgt##AsmParser();   \
+  LLVMInitialize##tgt##Disassembler();
+
+void EDDisassembler::initialize() {
+  if (sInitialized)
+    return;
+  
+  sInitialized = true;
+  
+  BRINGUP_TARGET(X86)
+}
+
+#undef BRINGUP_TARGET
+
+EDDisassembler *EDDisassembler::getDisassembler(Triple::ArchType arch,
+                                                EDAssemblySyntax_t syntax) {
+  CPUKey key;
+  key.Arch = arch;
+  key.Syntax = syntax;
+  
+  EDDisassembler::DisassemblerMap_t::iterator i = sDisassemblers.find(key);
+  
+  if (i != sDisassemblers.end()) {
+    return i->second;
+  }
+  else {
+    EDDisassembler* sdd = new EDDisassembler(key);
+    if(!sdd->valid()) {
+      delete sdd;
+      return NULL;
+    }
+    
+    sDisassemblers[key] = sdd;
+    
+    return sdd;
+  }
+  
+  return NULL;
+}
+
+EDDisassembler *EDDisassembler::getDisassembler(StringRef str,
+                                                EDAssemblySyntax_t syntax) {
+  Triple triple(str);
+  
+  return getDisassembler(triple.getArch(), syntax);
+}
+
+EDDisassembler::EDDisassembler(CPUKey &key) : 
+  Valid(false), ErrorString(), ErrorStream(ErrorString), Key(key) {
+  const InfoMap *infoMap = infoFromArch(key.Arch);
+  
+  if (!infoMap)
+    return;
+  
+  const char *triple = infoMap->String;
+  
+  int syntaxVariant = getLLVMSyntaxVariant(key.Arch, key.Syntax);
+  
+  if (syntaxVariant < 0)
+    return;
+  
+  std::string tripleString(triple);
+  std::string errorString;
+  
+  Tgt = TargetRegistry::lookupTarget(tripleString, 
+                                     errorString);
+  
+  if (!Tgt)
+    return;
+  
+  std::string featureString;
+  
+  OwningPtr<const TargetMachine>
+    targetMachine(Tgt->createTargetMachine(tripleString,
+                                           featureString));
+  
+  const TargetRegisterInfo *registerInfo = targetMachine->getRegisterInfo();
+  
+  if (!registerInfo)
+    return;
+  
+  AsmInfo.reset(Tgt->createAsmInfo(tripleString));
+  
+  if (!AsmInfo)
+    return;
+  
+  Disassembler.reset(Tgt->createMCDisassembler());
+  
+  if (!Disassembler)
+    return;
+  
+  InstString.reset(new std::string);
+  InstStream.reset(new raw_string_ostream(*InstString));
+  
+  InstPrinter.reset(Tgt->createMCInstPrinter(syntaxVariant,
+                                                *AsmInfo,
+                                                *InstStream));
+  
+  if (!InstPrinter)
+    return;
+    
+  GenericAsmLexer.reset(new AsmLexer(*AsmInfo));
+  SpecificAsmLexer.reset(Tgt->createAsmLexer(*AsmInfo));
+  SpecificAsmLexer->InstallLexer(*GenericAsmLexer);
+                          
+  InstInfos = infoMap->Info;
+  
+  initMaps(*targetMachine->getRegisterInfo());
+    
+  Valid = true;
+}
+
+EDDisassembler::~EDDisassembler() {
+  if(!valid())
+    return;
+}
+
+namespace {
+  /// EDMemoryObject - a subclass of MemoryObject that allows use of a callback
+  ///   as provided by the sd interface.  See MemoryObject.
+  class EDMemoryObject : public llvm::MemoryObject {
+  private:
+    EDByteReaderCallback Callback;
+    void *Arg;
+  public:
+    EDMemoryObject(EDByteReaderCallback callback,
+                   void *arg) : Callback(callback), Arg(arg) { }
+    ~EDMemoryObject() { }
+    uint64_t getBase() const { return 0x0; }
+    uint64_t getExtent() const { return (uint64_t)-1; }
+    int readByte(uint64_t address, uint8_t *ptr) const {
+      if(!Callback)
+        return -1;
+      
+      if(Callback(ptr, address, Arg))
+        return -1;
+      
+      return 0;
+    }
+  };
+}
+
+EDInst *EDDisassembler::createInst(EDByteReaderCallback byteReader, 
+                                   uint64_t address, 
+                                   void *arg) {
+  EDMemoryObject memoryObject(byteReader, arg);
+  
+  MCInst* inst = new MCInst;
+  uint64_t byteSize;
+  
+  if (!Disassembler->getInstruction(*inst,
+                                    byteSize,
+                                    memoryObject,
+                                    address,
+                                    ErrorStream)) {
+    delete inst;
+    return NULL;
+  }
+  else {
+    const InstInfo *thisInstInfo = &InstInfos[inst->getOpcode()];
+    
+    EDInst* sdInst = new EDInst(inst, byteSize, *this, thisInstInfo);
+    return sdInst;
+  }
+}
+
+void EDDisassembler::initMaps(const TargetRegisterInfo &registerInfo) {
+  unsigned numRegisters = registerInfo.getNumRegs();
+  unsigned registerIndex;
+  
+  for (registerIndex = 0; registerIndex < numRegisters; ++registerIndex) {
+    const char* registerName = registerInfo.get(registerIndex).Name;
+    
+    RegVec.push_back(registerName);
+    RegRMap[registerName] = registerIndex;
+  }
+  
+  if (Key.Arch == Triple::x86 ||
+      Key.Arch == Triple::x86_64) {
+    stackPointers.insert(registerIDWithName("SP"));
+    stackPointers.insert(registerIDWithName("ESP"));
+    stackPointers.insert(registerIDWithName("RSP"));
+    
+    programCounters.insert(registerIDWithName("IP"));
+    programCounters.insert(registerIDWithName("EIP"));
+    programCounters.insert(registerIDWithName("RIP"));
+  }
+}
+
+const char *EDDisassembler::nameWithRegisterID(unsigned registerID) const {
+  if (registerID >= RegVec.size())
+    return NULL;
+  else
+    return RegVec[registerID].c_str();
+}
+
+unsigned EDDisassembler::registerIDWithName(const char *name) const {
+  regrmap_t::const_iterator iter = RegRMap.find(std::string(name));
+  if (iter == RegRMap.end())
+    return 0;
+  else
+    return (*iter).second;
+}
+
+bool EDDisassembler::registerIsStackPointer(unsigned registerID) {
+  return (stackPointers.find(registerID) != stackPointers.end());
+}
+
+bool EDDisassembler::registerIsProgramCounter(unsigned registerID) {
+  return (programCounters.find(registerID) != programCounters.end());
+}
+
+int EDDisassembler::printInst(std::string& str,
+                              MCInst& inst) {
+  PrinterMutex.acquire();
+  
+  InstPrinter->printInst(&inst);
+  InstStream->flush();
+  str = *InstString;
+  InstString->clear();
+  
+  PrinterMutex.release();
+  
+  return 0;
+}
+
+int EDDisassembler::parseInst(SmallVectorImpl<MCParsedAsmOperand*> &operands,
+                              SmallVectorImpl<AsmToken> &tokens,
+                              const std::string &str) {
+  int ret = 0;
+  
+  const char *cStr = str.c_str();
+  MemoryBuffer *buf = MemoryBuffer::getMemBuffer(cStr, cStr + strlen(cStr));
+  
+  StringRef instName;
+  SMLoc instLoc;
+  
+  SourceMgr sourceMgr;
+  sourceMgr.AddNewSourceBuffer(buf, SMLoc()); // ownership of buf handed over
+  MCContext context;
+  OwningPtr<MCStreamer> streamer
+    (createNullStreamer(context));
+  AsmParser genericParser(sourceMgr, context, *streamer, *AsmInfo);
+  OwningPtr<TargetAsmParser> specificParser
+    (Tgt->createAsmParser(genericParser));
+  
+  AsmToken OpcodeToken = genericParser.Lex();
+  
+  if(OpcodeToken.is(AsmToken::Identifier)) {
+    instName = OpcodeToken.getString();
+    instLoc = OpcodeToken.getLoc();
+    if (specificParser->ParseInstruction(instName, instLoc, operands))
+      ret = -1;
+  }
+  else {
+    ret = -1;
+  }
+  
+  ParserMutex.acquire();
+  
+  if (!ret) {
+    GenericAsmLexer->setBuffer(buf);
+  
+    while (SpecificAsmLexer->Lex(),
+           SpecificAsmLexer->isNot(AsmToken::Eof) &&
+           SpecificAsmLexer->isNot(AsmToken::EndOfStatement)) {
+      if (SpecificAsmLexer->is(AsmToken::Error)) {
+        ret = -1;
+        break;
+      }
+      tokens.push_back(SpecificAsmLexer->getTok());
+    }
+  }
+
+  ParserMutex.release();
+  
+  return ret;
+}
+
+int EDDisassembler::llvmSyntaxVariant() const {
+  return LLVMSyntaxVariant;
+}
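`EDDisassembler::getDisassembler` above implements a lazy registry: a disassembler is constructed on the first request for an (arch, syntax) key, discarded if construction fails, and cached for reuse otherwise. The pattern in miniature (the `Disassembler` class here is a stand-in, not the real API):

```python
# Miniature of the EDDisassembler::getDisassembler caching pattern.
_registry = {}

class Disassembler:
    """Stand-in for EDDisassembler; construction can fail (valid=False)."""
    def __init__(self, arch, syntax):
        self.arch, self.syntax = arch, syntax
        self.valid = arch in ("x86", "x86_64")

def get_disassembler(arch, syntax):
    key = (arch, syntax)
    if key in _registry:
        return _registry[key]      # cache hit: hand back the same instance
    d = Disassembler(arch, syntax)
    if not d.valid:                # mirrors `if(!sdd->valid()) { delete sdd; }`
        return None
    _registry[key] = d
    return d

a = get_disassembler("x86", "att")
b = get_disassembler("x86", "att")
print(a is b)                            # the same cached object
print(get_disassembler("sparc", "att"))  # unsupported arch -> None
```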
diff --git a/libclamav/c++/llvm/tools/edis/EDDisassembler.h b/libclamav/c++/llvm/tools/edis/EDDisassembler.h
new file mode 100644
index 0000000..6be9152
--- /dev/null
+++ b/libclamav/c++/llvm/tools/edis/EDDisassembler.h
@@ -0,0 +1,248 @@
+//===-EDDisassembler.h - LLVM Enhanced Disassembler -------------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+// 
+//===----------------------------------------------------------------------===//
+//
+// This file defines the interface for the Enhanced Disassembly library's
+// disassembler class.  The disassembler is responsible for vending individual
+// instructions according to a given architecture and disassembly syntax.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef EDDisassembler_
+#define EDDisassembler_
+
+#include "EDInfo.inc"
+
+#include "llvm-c/EnhancedDisassembly.h"
+
+#include "llvm/ADT/OwningPtr.h"
+#include "llvm/ADT/Triple.h"
+#include "llvm/Support/raw_ostream.h"
+#include "llvm/System/Mutex.h"
+
+#include <map>
+#include <set>
+#include <string>
+#include <vector>
+
+namespace llvm {
+class AsmLexer;
+class AsmToken;
+class MCContext;
+class MCAsmInfo;
+class MCAsmLexer;
+class AsmParser;
+class TargetAsmLexer;
+class TargetAsmParser;
+class MCDisassembler;
+class MCInstPrinter;
+class MCInst;
+class MCParsedAsmOperand;
+class MCStreamer;
+template <typename T> class SmallVectorImpl;
+class SourceMgr;
+class Target;
+class TargetRegisterInfo;
+}
+
+/// EDDisassembler - Encapsulates a disassembler for a single architecture and
+///   disassembly syntax.  Also manages the static disassembler registry.
+struct EDDisassembler {
+  ////////////////////
+  // Static members //
+  ////////////////////
+  
+  /// CPUKey - Encapsulates the descriptor of an architecture/disassembly-syntax
+  ///   pair
+  struct CPUKey {
+    /// The architecture type
+    llvm::Triple::ArchType Arch;
+    
+    /// The assembly syntax
+    EDAssemblySyntax_t Syntax;
+    
+    /// operator== - Equality operator
+    bool operator==(const CPUKey &key) const {
+      return (Arch == key.Arch &&
+              Syntax == key.Syntax);
+    }
+    
+    /// operator< - Less-than operator
+    bool operator<(const CPUKey &key) const {
+      if(Arch < key.Arch)
+        return true;
+      if(Arch > key.Arch)
+        return false;
+      return Syntax < key.Syntax;
+    }
+  };
+  
+  typedef std::map<CPUKey, EDDisassembler*> DisassemblerMap_t;
+  
+  /// True if the disassembler registry has been initialized; false if not
+  static bool sInitialized;
+  /// A map from disassembler specifications to disassemblers.  Populated
+  ///   lazily.
+  static DisassemblerMap_t sDisassemblers;
+
+  /// getDisassembler - Returns the specified disassembler, or NULL on failure
+  ///
+  /// @arg arch   - The desired architecture
+  /// @arg syntax - The desired disassembly syntax
+  static EDDisassembler *getDisassembler(llvm::Triple::ArchType arch,
+                                         EDAssemblySyntax_t syntax);
+  
+  /// getDisassembler - Returns the disassembler for a given combination of
+  ///   CPU type, CPU subtype, and assembly syntax, or NULL on failure
+  ///
+  /// @arg str    - The string representation of the architecture triple, e.g.,
+  ///               "x86_64-apple-darwin"
+  /// @arg syntax - The disassembly syntax for the required disassembler
+  static EDDisassembler *getDisassembler(llvm::StringRef str,
+                                         EDAssemblySyntax_t syntax);
+  
+  /// initialize - Initializes the disassembler registry and the LLVM backend
+  static void initialize();
+  
+  ////////////////////////
+  // Per-object members //
+  ////////////////////////
+  
+  /// True only if the object has been fully and successfully initialized
+  bool Valid;
+  
+  /// The string that stores disassembler errors from the backend
+  std::string ErrorString;
+  /// The stream that wraps the ErrorString
+  llvm::raw_string_ostream ErrorStream;
+
+  /// The architecture/syntax pair for the current architecture
+  CPUKey Key;
+  /// The LLVM target corresponding to the disassembler
+  const llvm::Target *Tgt;
+  /// The assembly information for the target architecture
+  llvm::OwningPtr<const llvm::MCAsmInfo> AsmInfo;
+  /// The disassembler for the target architecture
+  llvm::OwningPtr<const llvm::MCDisassembler> Disassembler;
+  /// The output string for the instruction printer; must be guarded with 
+  ///   PrinterMutex
+  llvm::OwningPtr<std::string> InstString;
+  /// The output stream for the disassembler; must be guarded with
+  ///   PrinterMutex
+  llvm::OwningPtr<llvm::raw_string_ostream> InstStream;
+  /// The instruction printer for the target architecture; must be guarded with
+  ///   PrinterMutex when printing
+  llvm::OwningPtr<llvm::MCInstPrinter> InstPrinter;
+  /// The mutex that guards the instruction printer's printing functions, which
+  ///   use a shared stream
+  llvm::sys::Mutex PrinterMutex;
+  /// The array of instruction information provided by the TableGen backend for
+  ///   the target architecture
+  const InstInfo *InstInfos;
+  /// The target-specific lexer for use in tokenizing strings, in
+  ///   target-independent and target-specific portions
+  llvm::OwningPtr<llvm::AsmLexer> GenericAsmLexer;
+  llvm::OwningPtr<llvm::TargetAsmLexer> SpecificAsmLexer;
+  /// The guard for the above
+  llvm::sys::Mutex ParserMutex;
+  /// The LLVM number used for the target disassembly syntax variant
+  int LLVMSyntaxVariant;
+    
+  typedef std::vector<std::string> regvec_t;
+  typedef std::map<std::string, unsigned> regrmap_t;
+  
+  /// A vector of registers for quick mapping from LLVM register IDs to names
+  regvec_t RegVec;
+  /// A map of registers for quick mapping from register names to LLVM IDs
+  regrmap_t RegRMap;
+  
+  /// A set of register IDs for aliases of the stack pointer for the current
+  ///   architecture
+  std::set<unsigned> stackPointers;
+  /// A set of register IDs for aliases of the program counter for the current
+  ///   architecture
+  std::set<unsigned> programCounters;
+  
+  /// Constructor - initializes a disassembler with all the necessary objects,
+  ///   which come pre-allocated from the registry accessor function
+  ///
+  /// @arg key                - the architecture and disassembly syntax for the 
+  ///                           disassembler
+  EDDisassembler(CPUKey& key);
+  
+  /// valid - reports whether the constructor succeeded; false on failure.
+  bool valid() {
+    return Valid;
+  }
+  
+  ~EDDisassembler();
+  
+  /// createInst - creates and returns an instruction given a callback and
+  ///   memory address, or NULL on failure
+  ///
+  /// @arg byteReader - A callback function that provides machine code bytes
+  /// @arg address    - The address of the first byte of the instruction,
+  ///                   suitable for passing to byteReader
+  /// @arg arg        - An opaque argument for byteReader
+  EDInst *createInst(EDByteReaderCallback byteReader, 
+                     uint64_t address, 
+                     void *arg);
+
+  /// initMaps - initializes regVec and regRMap using the provided register
+  ///   info
+  ///
+  /// @arg registerInfo - the register information to use as a source
+  void initMaps(const llvm::TargetRegisterInfo &registerInfo);
+  /// nameWithRegisterID - Returns the name (owned by the EDDisassembler) of a 
+  ///   register for a given register ID, or NULL on failure
+  ///
+  /// @arg registerID - the ID of the register to be queried
+  const char *nameWithRegisterID(unsigned registerID) const;
+  /// registerIDWithName - Returns the ID of a register for a given register
+  ///   name, or (unsigned)-1 on failure
+  ///
+  /// @arg name - The name of the register
+  unsigned registerIDWithName(const char *name) const;
+  
+  /// registerIsStackPointer - reports whether a register ID is an alias for the
+  ///   stack pointer register
+  ///
+  /// @arg registerID - The LLVM register ID
+  bool registerIsStackPointer(unsigned registerID);
+  /// registerIsProgramCounter - reports whether a register ID is an alias for
+  ///   the program counter register
+  ///
+  /// @arg registerID - The LLVM register ID
+  bool registerIsProgramCounter(unsigned registerID);
+  
+  /// printInst - prints an MCInst to a string, returning 0 on success, or -1
+  ///   otherwise
+  ///
+  /// @arg str  - A reference to a string which is filled in with the string
+  ///             representation of the instruction
+  /// @arg inst - A reference to the MCInst to be printed
+  int printInst(std::string& str,
+                llvm::MCInst& inst);
+  
+  /// parseInst - extracts operands and tokens from a string for use in
+  ///   tokenizing the string.  Returns 0 on success, or -1 otherwise.
+  ///
+  /// @arg operands - A reference to a vector that will be filled in with the
+  ///                 parsed operands
+  /// @arg tokens   - A reference to a vector that will be filled in with the
+  ///                 tokens
+  /// @arg str      - The string representation of the instruction
+  int parseInst(llvm::SmallVectorImpl<llvm::MCParsedAsmOperand*> &operands,
+                llvm::SmallVectorImpl<llvm::AsmToken> &tokens,
+                const std::string &str);
+  
+  /// llvmSyntaxVariant - returns the LLVM syntax variant for this disassembler
+  int llvmSyntaxVariant() const;  
+};
+
+#endif
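`CPUKey` is used as a `std::map` key, so its `operator<` must define a strict weak ordering (irreflexive, asymmetric, transitive) or map lookups are undefined. A two-field lexicographic comparison satisfies this, which can be property-checked exhaustively over small stand-in values:

```python
# Property-check a two-field lexicographic less-than, the ordering a
# std::map key like CPUKey requires.  Small ints stand in for the
# Arch and Syntax enum values.
def less(a, b):
    if a[0] < b[0]:
        return True
    if a[0] > b[0]:
        return False
    return a[1] < b[1]

keys = [(x, y) for x in range(3) for y in range(3)]
for a in keys:
    assert not less(a, a)                       # irreflexive
    for b in keys:
        assert not (less(a, b) and less(b, a))  # asymmetric
        for c in keys:
            if less(a, b) and less(b, c):
                assert less(a, c)               # transitive
print("ok")
```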
diff --git a/libclamav/c++/llvm/tools/edis/EDInst.cpp b/libclamav/c++/llvm/tools/edis/EDInst.cpp
new file mode 100644
index 0000000..9ed2700
--- /dev/null
+++ b/libclamav/c++/llvm/tools/edis/EDInst.cpp
@@ -0,0 +1,205 @@
+//===-EDInst.cpp - LLVM Enhanced Disassembler -----------------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+// 
+//===----------------------------------------------------------------------===//
+//
+// This file implements the Enhanced Disassembly library's instruction class.
+// The instruction is responsible for vending the string representation, 
+// individual tokens, and operands for a single instruction.
+//
+//===----------------------------------------------------------------------===//
+
+#include "EDDisassembler.h"
+#include "EDInst.h"
+#include "EDOperand.h"
+#include "EDToken.h"
+
+#include "llvm/MC/MCInst.h"
+
+using namespace llvm;
+
+EDInst::EDInst(llvm::MCInst *inst,
+               uint64_t byteSize, 
+               EDDisassembler &disassembler,
+               const InstInfo *info) :
+  Disassembler(disassembler),
+  Inst(inst),
+  ThisInstInfo(info),
+  ByteSize(byteSize),
+  BranchTarget(-1),
+  MoveSource(-1),
+  MoveTarget(-1) {
+}
+
+EDInst::~EDInst() {
+  unsigned int index;
+  unsigned int numOperands = Operands.size();
+  
+  for (index = 0; index < numOperands; ++index)
+    delete Operands[index];
+  
+  unsigned int numTokens = Tokens.size();
+  
+  for (index = 0; index < numTokens; ++index)
+    delete Tokens[index];
+  
+  delete Inst;
+}
+
+uint64_t EDInst::byteSize() {
+  return ByteSize;
+}
+
+int EDInst::stringify() {
+  if (StringifyResult.valid())
+    return StringifyResult.result();
+  
+  if (Disassembler.printInst(String, *Inst))
+    return StringifyResult.setResult(-1);
+
+  OperandOrder = ThisInstInfo->operandOrders[Disassembler.llvmSyntaxVariant()];
+  
+  return StringifyResult.setResult(0);
+}
+
+int EDInst::getString(const char*& str) {
+  if (stringify())
+    return -1;
+  
+  str = String.c_str();
+  
+  return 0;
+}
+
+unsigned EDInst::instID() {
+  return Inst->getOpcode();
+}
+
+bool EDInst::isBranch() {
+  if (ThisInstInfo)
+    return ThisInstInfo->instructionFlags & kInstructionFlagBranch;
+  else
+    return false;
+}
+
+bool EDInst::isMove() {
+  if (ThisInstInfo)
+    return ThisInstInfo->instructionFlags & kInstructionFlagMove;
+  else
+    return false;
+}
+
+int EDInst::parseOperands() {
+  if (ParseResult.valid())
+    return ParseResult.result(); 
+  
+  if (!ThisInstInfo)
+    return ParseResult.setResult(-1);
+  
+  unsigned int opIndex;
+  unsigned int mcOpIndex = 0;
+  
+  for (opIndex = 0; opIndex < ThisInstInfo->numOperands; ++opIndex) {
+    if (isBranch() &&
+        (ThisInstInfo->operandFlags[opIndex] & kOperandFlagTarget)) {
+      BranchTarget = opIndex;
+    }
+    else if (isMove()) {
+      if (ThisInstInfo->operandFlags[opIndex] & kOperandFlagSource)
+        MoveSource = opIndex;
+      else if (ThisInstInfo->operandFlags[opIndex] & kOperandFlagTarget)
+        MoveTarget = opIndex;
+    }
+    
+    EDOperand *operand = new EDOperand(Disassembler, *this, opIndex, mcOpIndex);
+    
+    Operands.push_back(operand);
+  }
+  
+  return ParseResult.setResult(0);
+}
+
+int EDInst::branchTargetID() {
+  if (parseOperands())
+    return -1;
+  return BranchTarget;
+}
+
+int EDInst::moveSourceID() {
+  if (parseOperands())
+    return -1;
+  return MoveSource;
+}
+
+int EDInst::moveTargetID() {
+  if (parseOperands())
+    return -1;
+  return MoveTarget;
+}
+
+int EDInst::numOperands() {
+  if (parseOperands())
+    return -1;
+  return Operands.size();
+}
+
+int EDInst::getOperand(EDOperand *&operand, unsigned int index) {
+  if (parseOperands())
+    return -1;
+  
+  if (index >= Operands.size())
+    return -1;
+  
+  operand = Operands[index];
+  return 0;
+}
+
+int EDInst::tokenize() {
+  if (TokenizeResult.valid())
+    return TokenizeResult.result();
+  
+  if (stringify())
+    return TokenizeResult.setResult(-1);
+    
+  return TokenizeResult.setResult(EDToken::tokenize(Tokens,
+                                                    String,
+                                                    OperandOrder,
+                                                    Disassembler));
+    
+}
+
+int EDInst::numTokens() {
+  if (tokenize())
+    return -1;
+  return Tokens.size();
+}
+
+int EDInst::getToken(EDToken *&token, unsigned int index) {
+  if (tokenize())
+    return -1;
+  if (index >= Tokens.size())
+    return -1;
+  token = Tokens[index];
+  return 0;
+}
+
+#ifdef __BLOCKS__
+int EDInst::visitTokens(EDTokenVisitor_t visitor) {
+  if (tokenize())
+    return -1;
+  
+  tokvec_t::iterator iter;
+  
+  for (iter = Tokens.begin(); iter != Tokens.end(); ++iter) {
+    int ret = visitor(*iter);
+    if (ret == 1)
+      return 0;
+    if (ret != 0)
+      return -1;
+  }
+  
+  return 0;
+}
+#endif
diff --git a/libclamav/c++/llvm/tools/edis/EDInst.h b/libclamav/c++/llvm/tools/edis/EDInst.h
new file mode 100644
index 0000000..db03a78
--- /dev/null
+++ b/libclamav/c++/llvm/tools/edis/EDInst.h
@@ -0,0 +1,171 @@
+//===-EDInst.h - LLVM Enhanced Disassembler ---------------------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+// 
+//===----------------------------------------------------------------------===//
+//
+// This file defines the interface for the Enhanced Disassembly library's
+// instruction class.  The instruction is responsible for vending the string
+// representation, individual tokens and operands for a single instruction.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef EDInst_
+#define EDInst_
+
+#include "llvm-c/EnhancedDisassembly.h"
+
+#include "llvm/ADT/SmallVector.h"
+
+#include <string>
+#include <vector>
+
+/// CachedResult - Encapsulates the result of a function along with the validity
+///   of that result, so that slow functions don't need to run twice
+struct CachedResult {
+  /// True if the result has been obtained by executing the function
+  bool Valid;
+  /// The result last obtained from the function
+  int Result;
+  
+  /// Constructor - Initializes an invalid result
+  CachedResult() : Valid(false) { }
+  /// valid - Returns true if the result has been obtained by executing the
+  ///   function and false otherwise
+  bool valid() { return Valid; }
+  /// result - Returns the result of the function or an undefined value if
+  ///   valid() is false
+  int result() { return Result; }
+  /// setResult - Sets the result of the function and declares it valid
+  ///   returning the result (so that setResult() can be called from inside a
+  ///   return statement)
+  /// @arg result - The result of the function
+  int setResult(int result) { Result = result; Valid = true; return result; }
+};
+
+/// EDInst - Encapsulates a single instruction, which can be queried for its
+///   string representation, as well as its operands and tokens
+struct EDInst {
+  /// The parent disassembler
+  EDDisassembler &Disassembler;
+  /// The containing MCInst
+  llvm::MCInst *Inst;
+  /// The instruction information provided by TableGen for this instruction
+  const InstInfo *ThisInstInfo;
+  /// The number of bytes for the machine code representation of the instruction
+  uint64_t ByteSize;
+  
+  /// The result of the stringify() function
+  CachedResult StringifyResult;
+  /// The string representation of the instruction
+  std::string String;
+  /// The order in which operands from the InstInfo's operand information appear
+  /// in String
+  const char* OperandOrder;
+  
+  /// The result of the parseOperands() function
+  CachedResult ParseResult;
+  typedef llvm::SmallVector<EDOperand*, 5> opvec_t;
+  /// The instruction's operands
+  opvec_t Operands;
+  /// The operand corresponding to the target, if the instruction is a branch
+  int BranchTarget;
+  /// The operand corresponding to the source, if the instruction is a move
+  int MoveSource;
+  /// The operand corresponding to the target, if the instruction is a move
+  int MoveTarget;
+  
+  /// The result of the tokenize() function
+  CachedResult TokenizeResult;
+  typedef std::vector<EDToken*> tokvec_t;
+  /// The instruction's tokens
+  tokvec_t Tokens;
+  
+  /// Constructor - initializes an instruction given the output of the LLVM
+  ///   C++ disassembler
+  ///
+  /// @arg inst         - The MCInst, which will now be owned by this object
+  /// @arg byteSize     - The size of the consumed instruction, in bytes
+  /// @arg disassembler - The parent disassembler
+  /// @arg instInfo     - The instruction information produced by the table
+  ///                     generator for this instruction
+  EDInst(llvm::MCInst *inst,
+         uint64_t byteSize,
+         EDDisassembler &disassembler,
+         const InstInfo *instInfo);
+  ~EDInst();
+  
+  /// byteSize - returns the number of bytes consumed by the machine code
+  ///   representation of the instruction
+  uint64_t byteSize();
+  /// instID - returns the LLVM instruction ID of the instruction
+  unsigned instID();
+  
+  /// stringify - populates the String and OperandOrder members of the
+  ///   instruction, returning 0 on success or -1 otherwise
+  int stringify();
+  /// getString - retrieves a pointer to the string representation of the
+  ///   instruction, returning 0 on success or -1 otherwise
+  ///
+  /// @arg str - A reference to a pointer that, on success, is set to point to
+  ///   the string representation of the instruction; this string is still owned
+  ///   by the instruction and will be deleted when the instruction is
+  int getString(const char *&str);
+  
+  /// isBranch - Returns true if the instruction is a branch
+  bool isBranch();
+  /// isMove - Returns true if the instruction is a move
+  bool isMove();
+  
+  /// parseOperands - populates the Operands member of the instruction,
+  ///   returning 0 on success or -1 otherwise
+  int parseOperands();
+  /// branchTargetID - returns the ID (suitable for use with getOperand()) of 
+  ///   the target operand if the instruction is a branch, or -1 otherwise
+  int branchTargetID();
+  /// moveSourceID - returns the ID of the source operand if the instruction
+  ///   is a move, or -1 otherwise
+  int moveSourceID();
+  /// moveTargetID - returns the ID of the target operand if the instruction
+  ///   is a move, or -1 otherwise
+  int moveTargetID();
+  
+  /// numOperands - returns the number of operands available to retrieve, or -1
+  ///   on error
+  int numOperands();
+  /// getOperand - retrieves an operand from the instruction's operand list by
+  ///   index, returning 0 on success or -1 on error
+  ///
+  /// @arg operand  - A reference whose target is pointed at the operand on
+  ///                 success, although the operand is still owned by the EDInst
+  /// @arg index    - The index of the operand in the instruction
+  int getOperand(EDOperand *&operand, unsigned int index);
+
+  /// tokenize - populates the Tokens member of the instruction, returning 0 on
+  ///   success or -1 otherwise
+  int tokenize();
+  /// numTokens - returns the number of tokens in the instruction, or -1 on
+  ///   error
+  int numTokens();
+  /// getToken - retrieves a token from the instruction's token list by index,
+  ///   returning 0 on success or -1 on error
+  ///
+  /// @arg token  - A reference whose target is pointed at the token on success,
+  ///               although the token is still owned by the EDInst
+  /// @arg index  - The index of the token in the instruction
+  int getToken(EDToken *&token, unsigned int index);
+
+#ifdef __BLOCKS__
+  /// visitTokens - Visits each token in turn and applies a block to it,
+  ///   returning 0 if all tokens are visited or the block signals early
+  ///   termination by returning 1; returns -1 on error
+  ///
+  /// @arg visitor  - The visitor block to apply to all tokens.
+  int visitTokens(EDTokenVisitor_t visitor);
+#endif
+};
+
+#endif
diff --git a/libclamav/c++/llvm/tools/edis/EDMain.cpp b/libclamav/c++/llvm/tools/edis/EDMain.cpp
new file mode 100644
index 0000000..3585657
--- /dev/null
+++ b/libclamav/c++/llvm/tools/edis/EDMain.cpp
@@ -0,0 +1,293 @@
+//===-EDMain.cpp - LLVM Enhanced Disassembly C API ------------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+// 
+//===----------------------------------------------------------------------===//
+//
+// This file implements the enhanced disassembler's public C API.
+//
+//===----------------------------------------------------------------------===//
+
+#include "EDDisassembler.h"
+#include "EDInst.h"
+#include "EDOperand.h"
+#include "EDToken.h"
+
+#include "llvm-c/EnhancedDisassembly.h"
+
+int EDGetDisassembler(EDDisassemblerRef *disassembler,
+                      const char *triple,
+                      EDAssemblySyntax_t syntax) {
+  EDDisassembler::initialize();
+  
+  EDDisassemblerRef ret = EDDisassembler::getDisassembler(triple,
+                                                          syntax);
+  
+  if (ret) {
+    *disassembler = ret;
+    return 0;
+  }
+  else {
+    return -1;
+  }
+}
+
+int EDGetRegisterName(const char** regName,
+                      EDDisassemblerRef disassembler,
+                      unsigned regID) {
+  const char* name = disassembler->nameWithRegisterID(regID);
+  if(!name)
+    return -1;
+  *regName = name;
+  return 0;
+}
+
+int EDRegisterIsStackPointer(EDDisassemblerRef disassembler,
+                             unsigned regID) {
+  return disassembler->registerIsStackPointer(regID) ? 1 : 0;
+}
+
+int EDRegisterIsProgramCounter(EDDisassemblerRef disassembler,
+                               unsigned regID) {
+  return disassembler->registerIsProgramCounter(regID) ? 1 : 0;
+}
+
+unsigned int EDCreateInsts(EDInstRef *insts,
+                           unsigned int count,
+                           EDDisassemblerRef disassembler,
+                           EDByteReaderCallback byteReader,
+                           uint64_t address,
+                           void *arg) {
+  unsigned int index;
+  
+  for (index = 0; index < count; index++) {
+    EDInst *inst = disassembler->createInst(byteReader, address, arg);
+    
+    if(!inst)
+      return index;
+    
+    insts[index] = inst;
+    address += inst->byteSize();
+  }
+  
+  return count;
+}
+
+void EDReleaseInst(EDInstRef inst) {
+  delete inst;
+}
+
+int EDInstByteSize(EDInstRef inst) {
+  return inst->byteSize();
+}
+
+int EDGetInstString(const char **buf,
+                    EDInstRef inst) {
+  return inst->getString(*buf);
+}
+
+int EDInstID(unsigned *instID, EDInstRef inst) {
+  *instID = inst->instID();
+  return 0;
+}
+
+int EDInstIsBranch(EDInstRef inst) {
+  return inst->isBranch();
+}
+
+int EDInstIsMove(EDInstRef inst) {
+  return inst->isMove();
+}
+
+int EDBranchTargetID(EDInstRef inst) {
+  return inst->branchTargetID();
+}
+
+int EDMoveSourceID(EDInstRef inst) {
+  return inst->moveSourceID();
+}
+
+int EDMoveTargetID(EDInstRef inst) {
+  return inst->moveTargetID();
+}
+
+int EDNumTokens(EDInstRef inst) {
+  return inst->numTokens();
+}
+
+int EDGetToken(EDTokenRef *token,
+               EDInstRef inst,
+               int index) {
+  return inst->getToken(*token, index);
+}
+
+int EDGetTokenString(const char **buf,
+                     EDTokenRef token) {
+  return token->getString(*buf);
+}
+
+int EDOperandIndexForToken(EDTokenRef token) {
+  return token->operandID();
+}
+
+int EDTokenIsWhitespace(EDTokenRef token) {
+  if(token->type() == EDToken::kTokenWhitespace)
+    return 1;
+  else
+    return 0;
+}
+
+int EDTokenIsPunctuation(EDTokenRef token) {
+  if(token->type() == EDToken::kTokenPunctuation)
+    return 1;
+  else
+    return 0;
+}
+
+int EDTokenIsOpcode(EDTokenRef token) {
+  if(token->type() == EDToken::kTokenOpcode)
+    return 1;
+  else
+    return 0;
+}
+
+int EDTokenIsLiteral(EDTokenRef token) {
+  if(token->type() == EDToken::kTokenLiteral)
+    return 1;
+  else
+    return 0;
+}
+
+int EDTokenIsRegister(EDTokenRef token) {
+  if(token->type() == EDToken::kTokenRegister)
+    return 1;
+  else
+    return 0;
+}
+
+int EDTokenIsNegativeLiteral(EDTokenRef token) {
+  if(token->type() != EDToken::kTokenLiteral)
+    return -1;
+  
+  return token->literalSign();
+}
+
+int EDLiteralTokenAbsoluteValue(uint64_t *value,
+                                EDTokenRef token) {
+  if(token->type() != EDToken::kTokenLiteral)
+    return -1;
+  
+  return token->literalAbsoluteValue(*value);
+}
+
+int EDRegisterTokenValue(unsigned *registerID,
+                         EDTokenRef token) {
+  if(token->type() != EDToken::kTokenRegister)
+    return -1;
+  
+  return token->registerID(*registerID);
+}
+
+int EDNumOperands(EDInstRef inst) {
+  return inst->numOperands();
+}
+
+int EDGetOperand(EDOperandRef *operand,
+                 EDInstRef inst,
+                 int index) {
+  return inst->getOperand(*operand, index);
+}
+
+int EDOperandIsRegister(EDOperandRef operand) {
+  return operand->isRegister();
+}
+
+int EDOperandIsImmediate(EDOperandRef operand) {
+  return operand->isImmediate();
+}
+
+int EDOperandIsMemory(EDOperandRef operand) {
+  return operand->isMemory();
+}
+
+int EDRegisterOperandValue(unsigned *value, 
+                           EDOperandRef operand) {
+  if(!operand->isRegister())
+    return -1;
+  *value = operand->regVal();
+  return 0;
+}
+
+int EDImmediateOperandValue(uint64_t *value,
+                           EDOperandRef operand) {
+  if(!operand->isImmediate())
+    return -1;
+  *value = operand->immediateVal();
+  return 0;
+}
+
+int EDEvaluateOperand(uint64_t *result,
+                      EDOperandRef operand,
+                      EDRegisterReaderCallback regReader,
+                      void *arg) {
+  return operand->evaluate(*result, regReader, arg);
+}
+
+#ifdef __BLOCKS__
+
+struct ByteReaderWrapper {
+  EDByteBlock_t byteBlock;
+};
+
+static int readerWrapperCallback(uint8_t *byte, 
+                          uint64_t address,
+                          void *arg) {
+  struct ByteReaderWrapper *wrapper = (struct ByteReaderWrapper *)arg;
+  return wrapper->byteBlock(byte, address);
+}
+
+unsigned int EDBlockCreateInsts(EDInstRef *insts,
+                                int count,
+                                EDDisassemblerRef disassembler,
+                                EDByteBlock_t byteBlock,
+                                uint64_t address) {
+  struct ByteReaderWrapper wrapper;
+  wrapper.byteBlock = byteBlock;
+  
+  return EDCreateInsts(insts,
+                       count,
+                       disassembler, 
+                       readerWrapperCallback, 
+                       address, 
+                       (void*)&wrapper);
+}
+
+int EDBlockEvaluateOperand(uint64_t *result,
+                           EDOperandRef operand,
+                           EDRegisterBlock_t regBlock) {
+  return operand->evaluate(*result, regBlock);
+}
+
+int EDBlockVisitTokens(EDInstRef inst,
+                       EDTokenVisitor_t visitor) {
+  return inst->visitTokens(visitor);
+}
+
+#else
+
+extern "C" unsigned int EDBlockCreateInsts() {
+  return 0;
+}
+
+extern "C" int EDBlockEvaluateOperand() {
+  return -1;
+}
+
+extern "C" int EDBlockVisitTokens() {
+  return -1;
+}
+
+#endif
diff --git a/libclamav/c++/llvm/tools/edis/EDOperand.cpp b/libclamav/c++/llvm/tools/edis/EDOperand.cpp
new file mode 100644
index 0000000..da6797e
--- /dev/null
+++ b/libclamav/c++/llvm/tools/edis/EDOperand.cpp
@@ -0,0 +1,168 @@
+//===-EDOperand.cpp - LLVM Enhanced Disassembler --------------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+// 
+//===----------------------------------------------------------------------===//
+//
+// This file implements the Enhanced Disassembly library's operand class.  The
+// operand is responsible for allowing evaluation given a particular register 
+// context.
+//
+//===----------------------------------------------------------------------===//
+
+#include "EDDisassembler.h"
+#include "EDInst.h"
+#include "EDOperand.h"
+
+#include "llvm/MC/MCInst.h"
+
+using namespace llvm;
+
+EDOperand::EDOperand(const EDDisassembler &disassembler,
+                     const EDInst &inst,
+                     unsigned int opIndex,
+                     unsigned int &mcOpIndex) :
+  Disassembler(disassembler),
+  Inst(inst),
+  OpIndex(opIndex),
+  MCOpIndex(mcOpIndex) {
+  unsigned int numMCOperands = 0;
+    
+  if(Disassembler.Key.Arch == Triple::x86 ||
+     Disassembler.Key.Arch == Triple::x86_64) {
+    uint8_t operandFlags = inst.ThisInstInfo->operandFlags[opIndex];
+    
+    if (operandFlags & kOperandFlagImmediate) {
+      numMCOperands = 1;
+    }
+    else if (operandFlags & kOperandFlagRegister) {
+      numMCOperands = 1;
+    }
+    else if (operandFlags & kOperandFlagMemory) {
+      if (operandFlags & kOperandFlagPCRelative) {
+        numMCOperands = 1;
+      }
+      else {
+        numMCOperands = 5;
+      }
+    }
+    else if (operandFlags & kOperandFlagEffectiveAddress) {
+      numMCOperands = 4;
+    }
+  }
+    
+  mcOpIndex += numMCOperands;
+}
+
+EDOperand::~EDOperand() {
+}
+
+int EDOperand::evaluate(uint64_t &result,
+                        EDRegisterReaderCallback callback,
+                        void *arg) {
+  if (Disassembler.Key.Arch == Triple::x86 ||
+      Disassembler.Key.Arch == Triple::x86_64) {
+    uint8_t operandFlags = Inst.ThisInstInfo->operandFlags[OpIndex];
+    
+    if (operandFlags & kOperandFlagImmediate) {
+      result = Inst.Inst->getOperand(MCOpIndex).getImm();
+      return 0;
+    }
+    if (operandFlags & kOperandFlagRegister) {
+      unsigned reg = Inst.Inst->getOperand(MCOpIndex).getReg();
+      return callback(&result, reg, arg);
+    }
+    if (operandFlags & kOperandFlagMemory ||
+        operandFlags & kOperandFlagEffectiveAddress){
+      if(operandFlags & kOperandFlagPCRelative) {
+        int64_t displacement = Inst.Inst->getOperand(MCOpIndex).getImm();
+        
+        uint64_t ripVal;
+        
+        // TODO fix how we do this
+        
+        if (callback(&ripVal, Disassembler.registerIDWithName("RIP"), arg))
+          return -1;
+        
+        result = ripVal + displacement;
+        return 0;
+      }
+      else {
+        unsigned baseReg = Inst.Inst->getOperand(MCOpIndex).getReg();
+        uint64_t scaleAmount = Inst.Inst->getOperand(MCOpIndex+1).getImm();
+        unsigned indexReg = Inst.Inst->getOperand(MCOpIndex+2).getReg();
+        int64_t displacement = Inst.Inst->getOperand(MCOpIndex+3).getImm();
+        //unsigned segmentReg = Inst.Inst->getOperand(MCOpIndex+4).getReg();
+      
+        uint64_t addr = 0;
+        
+        if(baseReg) {
+          uint64_t baseVal;
+          if (callback(&baseVal, baseReg, arg))
+            return -1;
+          addr += baseVal;
+        }
+        
+        if(indexReg) {
+          uint64_t indexVal;
+          if (callback(&indexVal, indexReg, arg))
+            return -1;
+          addr += (scaleAmount * indexVal);
+        }
+        
+        addr += displacement;
+        
+        result = addr;
+        return 0;
+      }
+    }
+    return -1;
+  }
+  
+  return -1;
+}
+
+int EDOperand::isRegister() {
+  return(Inst.ThisInstInfo->operandFlags[OpIndex] & kOperandFlagRegister);
+}
+
+unsigned EDOperand::regVal() {
+  return Inst.Inst->getOperand(MCOpIndex).getReg(); 
+}
+
+int EDOperand::isImmediate() {
+  return(Inst.ThisInstInfo->operandFlags[OpIndex] & kOperandFlagImmediate);
+}
+
+uint64_t EDOperand::immediateVal() {
+  return Inst.Inst->getOperand(MCOpIndex).getImm();
+}
+
+int EDOperand::isMemory() {
+  return(Inst.ThisInstInfo->operandFlags[OpIndex] & kOperandFlagMemory);
+}
+
+#ifdef __BLOCKS__
+struct RegisterReaderWrapper {
+  EDRegisterBlock_t regBlock;
+};
+
+int readerWrapperCallback(uint64_t *value, 
+                          unsigned regID, 
+                          void *arg) {
+  struct RegisterReaderWrapper *wrapper = (struct RegisterReaderWrapper *)arg;
+  return wrapper->regBlock(value, regID);
+}
+
+int EDOperand::evaluate(uint64_t &result,
+                        EDRegisterBlock_t regBlock) {
+  struct RegisterReaderWrapper wrapper;
+  wrapper.regBlock = regBlock;
+  return evaluate(result, 
+                  readerWrapperCallback, 
+                  (void*)&wrapper);
+}
+#endif
diff --git a/libclamav/c++/llvm/tools/edis/EDOperand.h b/libclamav/c++/llvm/tools/edis/EDOperand.h
new file mode 100644
index 0000000..ad9345b
--- /dev/null
+++ b/libclamav/c++/llvm/tools/edis/EDOperand.h
@@ -0,0 +1,78 @@
+//===-EDOperand.h - LLVM Enhanced Disassembler ------------------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+// 
+//===----------------------------------------------------------------------===//
+//
+// This file defines the interface for the Enhanced Disassembly library's 
+// operand class.  The operand is responsible for allowing evaluation given a
+// particular register context.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef EDOperand_
+#define EDOperand_
+
+#include "llvm-c/EnhancedDisassembly.h"
+
+/// EDOperand - Encapsulates a single operand, which can be evaluated by the
+///   client
+struct EDOperand {
+  /// The parent disassembler
+  const EDDisassembler &Disassembler;
+  /// The parent instruction
+  const EDInst &Inst;
+  
+  /// The index of the operand in the EDInst
+  unsigned int OpIndex;
+  /// The index of the first component of the operand in the MCInst
+  unsigned int MCOpIndex;
+  
+  /// Constructor - Initializes an EDOperand
+  ///
+  /// @arg disassembler - The disassembler responsible for the operand
+  /// @arg inst         - The instruction containing this operand
+  /// @arg opIndex      - The index of the operand in inst
+  /// @arg mcOpIndex    - The index of the operand in the original MCInst
+  EDOperand(const EDDisassembler &disassembler,
+            const EDInst &inst,
+            unsigned int opIndex,
+            unsigned int &mcOpIndex);
+  ~EDOperand();
+  
+  /// evaluate - Returns the numeric value of an operand to the extent possible,
+  ///   returning 0 on success or -1 if there was some problem (such as a 
+  ///   register not being readable)
+  ///
+  /// @arg result   - A reference whose target is filled in with the value of
+  ///                 the operand (the address if it is a memory operand)
+  /// @arg callback - A function to call to obtain register values
+  /// @arg arg      - An opaque argument to pass to callback
+  int evaluate(uint64_t &result,
+               EDRegisterReaderCallback callback,
+               void *arg);
+
+  /// isRegister - Returns 1 if the operand is a register or 0 otherwise
+  int isRegister();
+  /// regVal - Returns the register value.
+  unsigned regVal();
+  
+  /// isImmediate - Returns 1 if the operand is an immediate or 0 otherwise
+  int isImmediate();
+  /// immediateVal - Returns the immediate value.
+  uint64_t immediateVal();
+  
+  /// isMemory - Returns 1 if the operand is a memory location or 0 otherwise
+  int isMemory();
+  
+#ifdef __BLOCKS__
+  /// evaluate - Like evaluate for a callback, but uses a block instead
+  int evaluate(uint64_t &result,
+               EDRegisterBlock_t regBlock);
+#endif
+};
+
+#endif
diff --git a/libclamav/c++/llvm/tools/edis/EDToken.cpp b/libclamav/c++/llvm/tools/edis/EDToken.cpp
new file mode 100644
index 0000000..cd79152
--- /dev/null
+++ b/libclamav/c++/llvm/tools/edis/EDToken.cpp
@@ -0,0 +1,208 @@
+//===-EDToken.cpp - LLVM Enhanced Disassembler ----------------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+// 
+//===----------------------------------------------------------------------===//
+//
+// This file implements the Enhanced Disassembler library's token class.  The
+// token is responsible for vending information about the token, such as its
+// type and logical value.
+//
+//===----------------------------------------------------------------------===//
+
+#include "EDDisassembler.h"
+#include "EDToken.h"
+
+#include "llvm/ADT/SmallVector.h"
+#include "llvm/MC/MCParser/MCAsmLexer.h"
+#include "llvm/MC/MCParser/MCParsedAsmOperand.h"
+
+using namespace llvm;
+
+EDToken::EDToken(StringRef str,
+                 enum tokenType type,
+                 uint64_t localType,
+                 EDDisassembler &disassembler) :
+  Disassembler(disassembler),
+  Str(str),
+  Type(type),
+  LocalType(localType),
+  OperandID(-1) {
+}
+
+EDToken::~EDToken() {
+}
+
+void EDToken::makeLiteral(bool sign, uint64_t absoluteValue) {
+  Type = kTokenLiteral;
+  LiteralSign = sign;
+  LiteralAbsoluteValue = absoluteValue;
+}
+
+void EDToken::makeRegister(unsigned registerID) {
+  Type = kTokenRegister;
+  RegisterID = registerID;
+}
+
+void EDToken::setOperandID(int operandID) {
+  OperandID = operandID;
+}
+
+enum EDToken::tokenType EDToken::type() const {
+  return Type;
+}
+
+uint64_t EDToken::localType() const {
+  return LocalType;
+}
+
+StringRef EDToken::string() const {
+  return Str;
+}
+
+int EDToken::operandID() const {
+  return OperandID;
+}
+
+int EDToken::literalSign() const {
+  if(Type != kTokenLiteral)
+    return -1;
+  return (LiteralSign ? 1 : 0);
+}
+
+int EDToken::literalAbsoluteValue(uint64_t &value) const {
+  if(Type != kTokenLiteral)
+    return -1;
+  value = LiteralAbsoluteValue;
+  return 0;
+}
+
+int EDToken::registerID(unsigned &registerID) const {
+  if(Type != kTokenRegister)
+    return -1;
+  registerID = RegisterID;
+  return 0;
+}
+
+int EDToken::tokenize(std::vector<EDToken*> &tokens,
+                      std::string &str,
+                      const char *operandOrder,
+                      EDDisassembler &disassembler) {
+  SmallVector<MCParsedAsmOperand*, 5> parsedOperands;
+  SmallVector<AsmToken, 10> asmTokens;
+  
+  if(disassembler.parseInst(parsedOperands, asmTokens, str))
+    return -1;
+  
+  SmallVectorImpl<MCParsedAsmOperand*>::iterator operandIterator;
+  unsigned int operandIndex;
+  SmallVectorImpl<AsmToken>::iterator tokenIterator;
+  
+  operandIterator = parsedOperands.begin();
+  operandIndex = 0;
+  
+  bool readOpcode = false;
+  
+  const char *wsPointer = asmTokens.begin()->getLoc().getPointer();
+  
+  for (tokenIterator = asmTokens.begin();
+       tokenIterator != asmTokens.end();
+       ++tokenIterator) {
+    SMLoc tokenLoc = tokenIterator->getLoc();
+    
+    const char *tokenPointer = tokenLoc.getPointer();
+    
+    if(tokenPointer > wsPointer) {
+      unsigned long wsLength = tokenPointer - wsPointer;
+      
+      EDToken *whitespaceToken = new EDToken(StringRef(wsPointer, wsLength),
+                                             EDToken::kTokenWhitespace,
+                                             0,
+                                             disassembler);
+      
+      tokens.push_back(whitespaceToken);
+    }
+    
+    wsPointer = tokenPointer + tokenIterator->getString().size();
+    
+    while (operandIterator != parsedOperands.end() &&
+           tokenLoc.getPointer() > 
+           (*operandIterator)->getEndLoc().getPointer()) {
+      ++operandIterator;
+      ++operandIndex;
+    }
+    
+    EDToken *token;
+    
+    switch (tokenIterator->getKind()) {
+    case AsmToken::Identifier:
+      if (!readOpcode) {
+        token = new EDToken(tokenIterator->getString(),
+                            EDToken::kTokenOpcode,
+                            (uint64_t)tokenIterator->getKind(),
+                            disassembler);
+        readOpcode = true;
+        break;
+      }
+      // any identifier that isn't the opcode is treated as mere punctuation,
+      // so we fall through to the default case
+    default:
+      token = new EDToken(tokenIterator->getString(),
+                          EDToken::kTokenPunctuation,
+                          (uint64_t)tokenIterator->getKind(),
+                          disassembler);
+      break;
+    case AsmToken::Integer:
+    {
+      token = new EDToken(tokenIterator->getString(),
+                          EDToken::kTokenLiteral,
+                          (uint64_t)tokenIterator->getKind(),
+                          disassembler);
+        
+      int64_t intVal = tokenIterator->getIntVal();
+      
+      if(intVal < 0)  
+        token->makeLiteral(true, -intVal);
+      else
+        token->makeLiteral(false, intVal);
+      break;
+    }
+    case AsmToken::Register:
+    {
+      token = new EDToken(tokenIterator->getString(),
+                          EDToken::kTokenLiteral,
+                          (uint64_t)tokenIterator->getKind(),
+                          disassembler);
+      
+      token->makeRegister((unsigned)tokenIterator->getRegVal());
+      break;
+    }
+    }
+    
+    if(operandIterator != parsedOperands.end() &&
+       tokenLoc.getPointer() >= 
+       (*operandIterator)->getStartLoc().getPointer()) {
+      /// operandIndex == 0 means the operand is the instruction (which the
+      /// AsmParser treats as an operand but edis does not).  We therefore skip
+      /// operandIndex == 0 and subtract 1 from all other operand indices.
+      
+      if(operandIndex > 0)
+        token->setOperandID(operandOrder[operandIndex - 1]);
+    }
+    
+    tokens.push_back(token);
+  }
+  
+  return 0;
+}
+
+int EDToken::getString(const char*& buf) {
+  if(PermStr.length() == 0) {
+    PermStr = Str.str();
+  }
+  buf = PermStr.c_str();
+  return 0;
+}
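The tokenize loop above interleaves whitespace tokens by tracking wsPointer: whenever the next AsmToken starts past the end of the previous one, the gap becomes an EDToken::kTokenWhitespace. That bookkeeping can be sketched in a self-contained form; the function and types below are illustrative stand-ins, not the edis API:

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Given the (start, length) spans of parsed tokens inside `src`, emit a flat
// token list in which every gap between consecutive spans becomes its own
// whitespace token, mirroring the wsPointer advance in EDToken::tokenize.
std::vector<std::string> interleaveWhitespace(
    const std::string &src,
    const std::vector<std::pair<std::size_t, std::size_t>> &tokenSpans) {
  std::vector<std::string> out;
  std::size_t wsStart = 0; // analogous to wsPointer
  for (const auto &span : tokenSpans) {
    if (span.first > wsStart) // gap before this token -> whitespace token
      out.push_back(src.substr(wsStart, span.first - wsStart));
    out.push_back(src.substr(span.first, span.second));
    wsStart = span.first + span.second; // advance past this token's text
  }
  return out;
}
```

For "movl %eax, %ebx" with spans for "movl", "%eax", "," and "%ebx", this yields six tokens: the four lexed tokens plus the two single-space gaps.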
diff --git a/libclamav/c++/llvm/tools/edis/EDToken.h b/libclamav/c++/llvm/tools/edis/EDToken.h
new file mode 100644
index 0000000..e4ae91f
--- /dev/null
+++ b/libclamav/c++/llvm/tools/edis/EDToken.h
@@ -0,0 +1,135 @@
+//===-EDToken.h - LLVM Enhanced Disassembler --------------------*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+// 
+//===----------------------------------------------------------------------===//
+//
+// This file defines the interface for the Enhanced Disassembly library's token
+// class.  The token is responsible for vending information about the token, 
+// such as its type and logical value.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef EDToken_
+#define EDToken_
+
+#include "llvm-c/EnhancedDisassembly.h"
+#include "llvm/ADT/StringRef.h"
+
+#include <string>
+#include <vector>
+
+/// EDToken - Encapsulates a single token, which can provide a string
+///   representation of itself or interpret itself in various ways, depending
+///   on the token type.
+struct EDToken {
+  enum tokenType {
+    kTokenWhitespace,
+    kTokenOpcode,
+    kTokenLiteral,
+    kTokenRegister,
+    kTokenPunctuation
+  };
+  
+  /// The parent disassembler
+  EDDisassembler &Disassembler;
+
+  /// The token's string representation
+  llvm::StringRef Str;
+  /// The token's string representation, but in a form suitable for export
+  std::string PermStr;
+  /// The type of the token, as exposed through the external API
+  enum tokenType Type;
+  /// The type of the token, as recorded by the syntax-specific tokenizer
+  uint64_t LocalType;
+  /// The operand corresponding to the token, or (unsigned int)-1 if not
+  ///   part of an operand.
+  int OperandID;
+  
+  /// The sign if the token is a literal (1 if negative, 0 otherwise)
+  bool LiteralSign;
+  /// The absolute value if the token is a literal
+  uint64_t LiteralAbsoluteValue;
+  /// The LLVM register ID if the token is a register name
+  unsigned RegisterID;
+  
+  /// Constructor - Initializes an EDToken with the information common to all
+  ///   tokens
+  ///
+  /// @arg str          - The string corresponding to the token
+  /// @arg type         - The token's type as exposed through the public API
+  /// @arg localType    - The token's type as recorded by the tokenizer
+  /// @arg disassembler - The disassembler responsible for the token
+  EDToken(llvm::StringRef str,
+          enum tokenType type,
+          uint64_t localType,
+          EDDisassembler &disassembler);
+  
+  /// makeLiteral - Adds the information specific to a literal
+  /// @arg sign           - The sign of the literal (1 if negative, 0 
+  ///                       otherwise)
+  ///
+  /// @arg absoluteValue  - The absolute value of the literal
+  void makeLiteral(bool sign, uint64_t absoluteValue);
+  /// makeRegister - Adds the information specific to a register
+  ///
+  /// @arg registerID - The LLVM register ID
+  void makeRegister(unsigned registerID);
+  
+  /// setOperandID - Links the token to a numbered operand
+  ///
+  /// @arg operandID  - The operand ID to link to
+  void setOperandID(int operandID);
+  
+  ~EDToken();
+  
+  /// type - Returns the public type of the token
+  enum tokenType type() const;
+  /// localType - Returns the tokenizer-specific type of the token
+  uint64_t localType() const;
+  /// string - Returns the string representation of the token
+  llvm::StringRef string() const;
+  /// operandID - Returns the operand ID of the token
+  int operandID() const;
+  
+  /// literalSign - Returns the sign of the token 
+  ///   (1 if negative, 0 if positive or unsigned, -1 if it is not a literal)
+  int literalSign() const;
+  /// literalAbsoluteValue - Retrieves the absolute value of the token, and
+  ///   returns -1 if the token is not a literal
+  /// @arg value  - A reference to a value that is filled in with the absolute
+  ///               value, if it is valid
+  int literalAbsoluteValue(uint64_t &value) const;
+  /// registerID - Retrieves the register ID of the token, and returns -1 if the
+  ///   token is not a register
+  ///
+  /// @arg registerID - A reference to a value that is filled in with the 
+  ///                   register ID, if it is valid
+  int registerID(unsigned &registerID) const;
+  
+  /// tokenize - Tokenizes a string using the platform- and syntax-specific
+  ///   tokenizer, and returns 0 on success (-1 on failure)
+  ///
+  /// @arg tokens       - A vector that will be filled in with pointers to
+  ///                     allocated tokens
+  /// @arg str          - The string, as outputted by the AsmPrinter
+  /// @arg operandOrder - The order of the operands from the operandFlags array
+  ///                     as they appear in str
+  /// @arg disassembler - The disassembler for the desired target and
+  //                      assembly syntax
+  static int tokenize(std::vector<EDToken*> &tokens,
+                      std::string &str,
+                      const char *operandOrder,
+                      EDDisassembler &disassembler);
+  
+  /// getString - Directs a character pointer to the string, returning 0 on
+  ///   success (-1 on failure)
+  /// @arg buf  - A reference to a pointer that is set to point to the string.
+  ///   The string is still owned by the token.
+  int getString(const char*& buf);
+};
+
+#endif
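EDToken stores a literal as a sign flag plus an absolute value (LiteralSign, LiteralAbsoluteValue). One subtlety: the tokenizer in EDToken.cpp negates the signed value directly (`-intVal`), which is undefined behaviour when the value is INT64_MIN. The sketch below is our own defensive variant of that split, not the patch's code; it converts through uint64_t, where negation is well-defined modular arithmetic:

```cpp
#include <cstdint>

// Stand-in for the sign/absolute-value representation used by makeLiteral.
struct Literal {
  bool negative;
  uint64_t absValue;
};

Literal makeLiteral(int64_t v) {
  Literal l;
  l.negative = v < 0;
  // 0 - (uint64_t)v is well-defined for every input, unlike -v on INT64_MIN.
  l.absValue = l.negative ? 0 - static_cast<uint64_t>(v)
                          : static_cast<uint64_t>(v);
  return l;
}
```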
diff --git a/libclamav/c++/llvm/tools/edis/EnhancedDisassembly.exports b/libclamav/c++/llvm/tools/edis/EnhancedDisassembly.exports
new file mode 100644
index 0000000..d3f8743
--- /dev/null
+++ b/libclamav/c++/llvm/tools/edis/EnhancedDisassembly.exports
@@ -0,0 +1,36 @@
+_EDGetDisassembler
+_EDGetRegisterName
+_EDRegisterIsStackPointer
+_EDRegisterIsProgramCounter
+_EDCreateInsts
+_EDReleaseInst
+_EDInstByteSize
+_EDGetInstString
+_EDInstIsBranch
+_EDInstIsMove
+_EDBranchTargetID
+_EDMoveSourceID
+_EDMoveTargetID
+_EDNumTokens
+_EDGetToken
+_EDGetTokenString
+_EDOperandIndexForToken
+_EDTokenIsWhitespace
+_EDTokenIsPunctuation
+_EDTokenIsOpcode
+_EDTokenIsLiteral
+_EDTokenIsRegister
+_EDTokenIsNegativeLiteral
+_EDLiteralTokenAbsoluteValue
+_EDRegisterTokenValue
+_EDNumOperands
+_EDGetOperand
+_EDOperandIsRegister
+_EDOperandIsImmediate
+_EDOperandIsMemory
+_EDRegisterOperandValue
+_EDImmediateOperandValue
+_EDEvaluateOperand
+_EDBlockCreateInsts
+_EDBlockEvaluateOperand
+_EDBlockVisitTokens
diff --git a/libclamav/c++/llvm/tools/edis/Makefile b/libclamav/c++/llvm/tools/edis/Makefile
new file mode 100644
index 0000000..a3c5879
--- /dev/null
+++ b/libclamav/c++/llvm/tools/edis/Makefile
@@ -0,0 +1,55 @@
+##===- tools/edis/Makefile ---------------------------------*- Makefile -*-===##
+# 
+#                     The LLVM Compiler Infrastructure
+#
+# This file is distributed under the University of Illinois Open Source
+# License. See LICENSE.TXT for details.
+# 
+##===----------------------------------------------------------------------===##
+
+LEVEL = ../..
+LIBRARYNAME = EnhancedDisassembly
+
+BUILT_SOURCES = EDInfo.inc
+
+# Include this here so we can get the configuration of the targets
+# that have been configured for construction. We have to do this 
+# early so we can set up LINK_COMPONENTS before including Makefile.rules
+include $(LEVEL)/Makefile.config
+
+LINK_LIBS_IN_SHARED = 1
+SHARED_LIBRARY = 1
+
+LINK_COMPONENTS := $(TARGETS_TO_BUILD) x86asmprinter x86disassembler
+
+include $(LEVEL)/Makefile.common
+
+ifeq ($(HOST_OS),Darwin)
+    # set dylib internal version number to llvmCore submission number
+    ifdef LLVM_SUBMIT_VERSION
+        LLVMLibsOptions := $(LLVMLibsOptions) -Wl,-current_version \
+                        -Wl,$(LLVM_SUBMIT_VERSION).$(LLVM_SUBMIT_SUBVERSION) \
+                        -Wl,-compatibility_version -Wl,1
+    endif
+    # extra options to override libtool defaults 
+    LLVMLibsOptions    := $(LLVMLibsOptions)  \
+                         -avoid-version \
+                         -Wl,-exported_symbols_list -Wl,$(PROJ_SRC_DIR)/EnhancedDisassembly.exports \
+                         -Wl,-dead_strip \
+                         -Wl,-seg1addr -Wl,0xE0000000 
+
+    # Mac OS X 10.4 and earlier tools do not allow a second -install_name on command line
+    DARWIN_VERS := $(shell echo $(TARGET_TRIPLE) | sed 's/.*darwin\([0-9]*\).*/\1/')
+    ifneq ($(DARWIN_VERS),8)
+       LLVMLibsOptions    := $(LLVMLibsOptions)  \
+                            -no-undefined -Wl,-install_name \
+                            -Wl,"@executable_path/../lib/lib$(LIBRARYNAME)$(SHLIBEXT)"
+    endif
+endif
+
+EDInfo.inc:	$(TBLGEN)
+	$(Echo) "Building semantic information header"
+	$(Verb) $(TableGen) -o $(call SYSPATH, $@) -gen-enhanced-disassembly-header /dev/null
+
+clean::
+	-$(Verb) $(RM) -f EDInfo.inc
diff --git a/libclamav/c++/llvm/tools/llc/llc.cpp b/libclamav/c++/llvm/tools/llc/llc.cpp
index 4f93a43..fe34bd1 100644
--- a/libclamav/c++/llvm/tools/llc/llc.cpp
+++ b/libclamav/c++/llvm/tools/llc/llc.cpp
@@ -15,16 +15,13 @@
 
 #include "llvm/LLVMContext.h"
 #include "llvm/Module.h"
-#include "llvm/ModuleProvider.h"
 #include "llvm/PassManager.h"
 #include "llvm/Pass.h"
 #include "llvm/ADT/Triple.h"
 #include "llvm/Analysis/Verifier.h"
 #include "llvm/Support/IRReader.h"
-#include "llvm/CodeGen/FileWriters.h"
 #include "llvm/CodeGen/LinkAllAsmWriterComponents.h"
 #include "llvm/CodeGen/LinkAllCodegenComponents.h"
-#include "llvm/CodeGen/ObjectCodeEmitter.h"
 #include "llvm/Config/config.h"
 #include "llvm/LinkAllVMCore.h"
 #include "llvm/Support/CommandLine.h"
@@ -87,16 +84,15 @@ MAttrs("mattr",
   cl::value_desc("a1,+a2,-a3,..."));
 
 cl::opt<TargetMachine::CodeGenFileType>
-FileType("filetype", cl::init(TargetMachine::AssemblyFile),
+FileType("filetype", cl::init(TargetMachine::CGFT_AssemblyFile),
   cl::desc("Choose a file type (not all types are supported by all targets):"),
   cl::values(
-       clEnumValN(TargetMachine::AssemblyFile, "asm",
+       clEnumValN(TargetMachine::CGFT_AssemblyFile, "asm",
                   "Emit an assembly ('.s') file"),
-       clEnumValN(TargetMachine::ObjectFile, "obj",
+       clEnumValN(TargetMachine::CGFT_ObjectFile, "obj",
                   "Emit a native object ('.o') file [experimental]"),
-       clEnumValN(TargetMachine::DynamicLibrary, "dynlib",
-                  "Emit a native dynamic library ('.so') file"
-                  " [experimental]"),
+       clEnumValN(TargetMachine::CGFT_Null, "null",
+                  "Emit nothing, for performance testing"),
        clEnumValEnd));
 
 cl::opt<bool> NoVerify("disable-verify", cl::Hidden,
@@ -164,7 +160,8 @@ static formatted_raw_ostream *GetOutputStream(const char *TargetName,
 
   bool Binary = false;
   switch (FileType) {
-  case TargetMachine::AssemblyFile:
+  default: assert(0 && "Unknown file type");
+  case TargetMachine::CGFT_AssemblyFile:
     if (TargetName[0] == 'c') {
       if (TargetName[1] == 0)
         OutputFilename += ".cbe.c";
@@ -175,12 +172,12 @@ static formatted_raw_ostream *GetOutputStream(const char *TargetName,
     } else
       OutputFilename += ".s";
     break;
-  case TargetMachine::ObjectFile:
+  case TargetMachine::CGFT_ObjectFile:
     OutputFilename += ".o";
     Binary = true;
     break;
-  case TargetMachine::DynamicLibrary:
-    OutputFilename += LTDL_SHLIB_EXT;
+  case TargetMachine::CGFT_Null:
+    OutputFilename += ".null";
     Binary = true;
     break;
   }
@@ -334,8 +331,7 @@ int main(int argc, char **argv) {
     PM.run(mod);
   } else {
     // Build up all of the passes that we want to do to the module.
-    ExistingModuleProvider Provider(M.release());
-    FunctionPassManager Passes(&Provider);
+    FunctionPassManager Passes(M.get());
 
     // Add the target data from the target machine, if it exists, or the module.
     if (const TargetData *TD = Target.getTargetData())
@@ -348,32 +344,10 @@ int main(int argc, char **argv) {
       Passes.add(createVerifierPass());
 #endif
 
-    // Ask the target to add backend passes as necessary.
-    ObjectCodeEmitter *OCE = 0;
-
     // Override default to generate verbose assembly.
     Target.setAsmVerbosityDefault(true);
 
-    switch (Target.addPassesToEmitFile(Passes, *Out, FileType, OLvl)) {
-    default:
-      assert(0 && "Invalid file model!");
-      return 1;
-    case FileModel::Error:
-      errs() << argv[0] << ": target does not support generation of this"
-             << " file type!\n";
-      if (Out != &fouts()) delete Out;
-      // And the Out file is empty and useless, so remove it now.
-      sys::Path(OutputFilename).eraseFromDisk();
-      return 1;
-    case FileModel::AsmFile:
-    case FileModel::MachOFile:
-      break;
-    case FileModel::ElfFile:
-      OCE = AddELFWriter(Passes, *Out, Target);
-      break;
-    }
-
-    if (Target.addPassesToEmitFileFinish(Passes, OCE, OLvl)) {
+    if (Target.addPassesToEmitFile(Passes, *Out, FileType, OLvl)) {
       errs() << argv[0] << ": target does not support generation of this"
              << " file type!\n";
       if (Out != &fouts()) delete Out;
diff --git a/libclamav/c++/llvm/tools/lli/lli.cpp b/libclamav/c++/llvm/tools/lli/lli.cpp
index 218bb93..81c17cd 100644
--- a/libclamav/c++/llvm/tools/lli/lli.cpp
+++ b/libclamav/c++/llvm/tools/lli/lli.cpp
@@ -15,7 +15,6 @@
 
 #include "llvm/LLVMContext.h"
 #include "llvm/Module.h"
-#include "llvm/ModuleProvider.h"
 #include "llvm/Type.h"
 #include "llvm/Bitcode/ReaderWriter.h"
 #include "llvm/CodeGen/LinkAllCodegenComponents.h"
@@ -59,6 +58,22 @@ namespace {
   TargetTriple("mtriple", cl::desc("Override target triple for module"));
 
   cl::opt<std::string>
+  MArch("march",
+        cl::desc("Architecture to generate assembly for (see --version)"));
+
+  cl::opt<std::string>
+  MCPU("mcpu",
+       cl::desc("Target a specific cpu type (-mcpu=help for details)"),
+       cl::value_desc("cpu-name"),
+       cl::init(""));
+
+  cl::list<std::string>
+  MAttrs("mattr",
+         cl::CommaSeparated,
+         cl::desc("Target specific attributes (-mattr=help for details)"),
+         cl::value_desc("a1,+a2,-a3,..."));
+
+  cl::opt<std::string>
   EntryFunc("entry-function",
             cl::desc("Specify the entry function (default = 'main') "
                      "of the executable"),
@@ -110,28 +125,31 @@ int main(int argc, char **argv, char * const *envp) {
   
   // Load the bitcode...
   std::string ErrorMsg;
-  ModuleProvider *MP = NULL;
+  Module *Mod = NULL;
   if (MemoryBuffer *Buffer = MemoryBuffer::getFileOrSTDIN(InputFile,&ErrorMsg)){
-    MP = getBitcodeModuleProvider(Buffer, Context, &ErrorMsg);
-    if (!MP) delete Buffer;
+    Mod = getLazyBitcodeModule(Buffer, Context, &ErrorMsg);
+    if (!Mod) delete Buffer;
   }
   
-  if (!MP) {
+  if (!Mod) {
     errs() << argv[0] << ": error loading program '" << InputFile << "': "
            << ErrorMsg << "\n";
     exit(1);
   }
 
-  // Get the module as the MP could go away once EE takes over.
-  Module *Mod = NoLazyCompilation
-    ? MP->materializeModule(&ErrorMsg) : MP->getModule();
-  if (!Mod) {
-    errs() << argv[0] << ": bitcode didn't read correctly.\n";
-    errs() << "Reason: " << ErrorMsg << "\n";
-    exit(1);
+  // If not jitting lazily, load the whole bitcode file eagerly too.
+  if (NoLazyCompilation) {
+    if (Mod->MaterializeAllPermanently(&ErrorMsg)) {
+      errs() << argv[0] << ": bitcode didn't read correctly.\n";
+      errs() << "Reason: " << ErrorMsg << "\n";
+      exit(1);
+    }
   }
 
-  EngineBuilder builder(MP);
+  EngineBuilder builder(Mod);
+  builder.setMArch(MArch);
+  builder.setMCPU(MCPU);
+  builder.setMAttrs(MAttrs);
   builder.setErrorStr(&ErrorMsg);
   builder.setEngineKind(ForceInterpreter
                         ? EngineKind::Interpreter
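The lli change above replaces ModuleProvider with getLazyBitcodeModule: function bodies stay unmaterialized until first use, and NoLazyCompilation forces everything eagerly via MaterializeAllPermanently. A toy model of that lazy/eager split (hypothetical names only, no relation to the real bitcode reader):

```cpp
#include <functional>
#include <map>
#include <string>

// LazyModule defers "function bodies" (here, callbacks) until first use,
// like getLazyBitcodeModule; materializeAll() is the eager path that the
// NoLazyCompilation branch takes via MaterializeAllPermanently.
struct LazyModule {
  std::map<std::string, std::function<int()>> deferred; // unmaterialized
  std::map<std::string, int> materialized;              // realized bodies

  int use(const std::string &name) { // materialize on first use, then cache
    auto it = materialized.find(name);
    if (it == materialized.end())
      it = materialized.emplace(name, deferred.at(name)()).first;
    return it->second;
  }

  void materializeAll() { // eager path: realize every deferred body up front
    for (auto &kv : deferred)
      use(kv.first);
  }
};
```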
diff --git a/libclamav/c++/llvm/tools/llvm-config/Makefile b/libclamav/c++/llvm/tools/llvm-config/Makefile
index e5bdc04..cc5cf43 100644
--- a/libclamav/c++/llvm/tools/llvm-config/Makefile
+++ b/libclamav/c++/llvm/tools/llvm-config/Makefile
@@ -53,6 +53,46 @@ llvm-config.in: $(ConfigInIn) $(ConfigStatusScript)
 	$(Verb) cd $(PROJ_OBJ_ROOT) ; \
 		$(ConfigStatusScript) tools/llvm-config/llvm-config.in
 
+llvm-config-perobj: llvm-config.in $(GenLibDeps) $(LibDir) $(wildcard $(LibDir)/*.a)
+	$(Echo) "Generating llvm-config-perobj"
+	$(Verb) $(PERL) $(GenLibDeps) -perobj -flat $(LibDir) "$(NM_PATH)" >PerobjDeps.txt
+	$(Echo) "Checking for cyclic dependencies between LLVM objects."
+	$(Verb) $(PERL) $(PROJ_SRC_DIR)/find-cycles.pl < PerobjDeps.txt > PerobjDepsFinal.txt || rm -f $@
+	$(Verb) $(ECHO) 's/@LLVM_CPPFLAGS@/$(subst /,\/,$(SUB_CPPFLAGS))/' \
+	> temp.sed
+	$(Verb) $(ECHO) 's/@LLVM_CFLAGS@/$(subst /,\/,$(SUB_CFLAGS))/' \
+	>> temp.sed
+	$(Verb) $(ECHO) 's/@LLVM_CXXFLAGS@/$(subst /,\/,$(SUB_CXXFLAGS))/' \
+	>> temp.sed
+	$(Verb) $(ECHO) 's/@LLVM_LDFLAGS@/$(subst /,\/,$(SUB_LDFLAGS))/' \
+	>> temp.sed
+	$(Verb) $(ECHO) 's/@LLVM_BUILDMODE@/$(subst /,\/,$(BuildMode))/' \
+	>> temp.sed
+	$(Verb) $(SED) -f temp.sed < $< > $@
+	$(Verb) $(RM) temp.sed
+	$(Verb) cat PerobjDepsFinal.txt >> $@
+	$(Verb) chmod +x $@
+
+llvm-config-perobjincl: llvm-config.in $(GenLibDeps) $(LibDir) $(wildcard $(LibDir)/*.a)
+	$(Echo) "Generating llvm-config-perobjincl"
+	$(Verb) $(PERL) $(GenLibDeps) -perobj -perobjincl -flat $(LibDir) "$(NM_PATH)" >PerobjDepsIncl.txt
+	$(Echo) "Checking for cyclic dependencies between LLVM objects."
+	$(Verb) $(PERL) $(PROJ_SRC_DIR)/find-cycles.pl < PerobjDepsIncl.txt > PerobjDepsInclFinal.txt
+	$(Verb) $(ECHO) 's/@LLVM_CPPFLAGS@/$(subst /,\/,$(SUB_CPPFLAGS))/' \
+	> temp.sed
+	$(Verb) $(ECHO) 's/@LLVM_CFLAGS@/$(subst /,\/,$(SUB_CFLAGS))/' \
+	>> temp.sed
+	$(Verb) $(ECHO) 's/@LLVM_CXXFLAGS@/$(subst /,\/,$(SUB_CXXFLAGS))/' \
+	>> temp.sed
+	$(Verb) $(ECHO) 's/@LLVM_LDFLAGS@/$(subst /,\/,$(SUB_LDFLAGS))/' \
+	>> temp.sed
+	$(Verb) $(ECHO) 's/@LLVM_BUILDMODE@/$(subst /,\/,$(BuildMode))/' \
+	>> temp.sed
+	$(Verb) $(SED) -f temp.sed < $< > $@
+	$(Verb) $(RM) temp.sed
+	$(Verb) cat PerobjDepsInclFinal.txt >> $@
+	$(Verb) chmod +x $@
+
 # Build our final script.
 $(ToolDir)/llvm-config: llvm-config.in $(FinalLibDeps)
 	$(Echo) "Building llvm-config script."
diff --git a/libclamav/c++/llvm/tools/llvm-config/find-cycles.pl b/libclamav/c++/llvm/tools/llvm-config/find-cycles.pl
index 8156abd..5cbf5b4 100755
--- a/libclamav/c++/llvm/tools/llvm-config/find-cycles.pl
+++ b/libclamav/c++/llvm/tools/llvm-config/find-cycles.pl
@@ -62,6 +62,11 @@ foreach my $cycle (@CYCLES) {
         print STDERR "find-cycles.pl: Circular dependency between *.a files:\n";
         print STDERR "find-cycles.pl:   ", join(' ', @archives), "\n";
         push @modules, @archives; # WORKAROUND: Duplicate *.a files. Ick.
+    } elsif (@modules > 1) {
+        $cycles_found = $cycles_found + 1;
+        print STDERR "find-cycles.pl: Circular dependency between *.o files:\n";
+        print STDERR "find-cycles.pl:   ", join(' ', @modules), "\n";
+        push @modules, @modules; # WORKAROUND: Duplicate *.o files. Ick.
     }
 
     # Add to our output.  (@modules is already as sorted as we need it to be.)
diff --git a/libclamav/c++/llvm/tools/llvmc/example/mcc16/driver/Main.cpp b/libclamav/c++/llvm/tools/llvmc/example/mcc16/driver/Main.cpp
index 5d50f9d..e66e2f9 100644
--- a/libclamav/c++/llvm/tools/llvmc/example/mcc16/driver/Main.cpp
+++ b/libclamav/c++/llvm/tools/llvmc/example/mcc16/driver/Main.cpp
@@ -37,11 +37,15 @@ int main(int argc, char** argv) {
   DryRun.setHiddenFlag(llvm::cl::Hidden);
 
   llvm::cl::SetVersionPrinter(PIC16VersionPrinter); 
-  
-  TempDirname = "tmp-objs";
 
-  // Remove the temp dir if already exists.
+  // Ask for a standard temp dir, but just cache its basename, then delete it.
   llvm::sys::Path tempDir;
+  tempDir = llvm::sys::Path::GetTemporaryDirectory();
+  TempDirname = tempDir.getBasename();
+  tempDir.eraseFromDisk(true);
+
+  // We create a temp dir in the current dir with the cached name, but first
+  // remove any directory that already exists with that name.
   tempDir = TempDirname;
   tempDir.eraseFromDisk(true);
 
diff --git a/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PIC16Base.td b/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PIC16Base.td
index 717e95e..f13b9f8 100644
--- a/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PIC16Base.td
+++ b/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PIC16Base.td
@@ -47,15 +47,24 @@ def OptionList : OptionList<[
     (help "Optimization Level 3.")),
  (switch_option "Od",
     (help "Perform Debug-safe Optimizations only.")),
- (switch_option "r",
-    (help "Use resource file for part info"),
-    (really_hidden)),
+ (switch_option "w",
+    (help "Disable all warnings.")),
+// (switch_option "O1",
+//    (help "Optimization level 1")),
+// (switch_option "O2",
+//    (help "Optimization level 2. (Default)")),
+// (parameter_option "pre-RA-sched",
+//    (help "Example of an option that is passed to llc")),
  (parameter_option "regalloc",
-    (help "Register allocator to use.(possible values: simple, linearscan, pbqp, local. default = pbqp)")),
- (prefix_list_option "Wa,",
+    (help "Register allocator to use. (possible values: simple, linearscan, pbqp, local. default = linearscan)")),
+ (prefix_list_option "Wa,", (comma_separated),
     (help "Pass options to assembler (Run 'gpasm -help' for assembler options)")),
- (prefix_list_option "Wl,",
+ (prefix_list_option "Wl,", (comma_separated),
     (help "Pass options to linker (Run 'mplink -help' for linker options)"))
+// (prefix_list_option "Wllc,",
+//    (help "Pass options to llc")),
+// (prefix_list_option "Wo,",
+//    (help "Pass options to llvm-ld"))
 ]>;
 
 // Tools
@@ -75,6 +84,7 @@ class clang_based<string language, string cmd, string ext_E> : Tool<
                 (switch_on "E"), [(stop_compilation), (output_suffix ext_E)],
                 (switch_on "bc"),[(stop_compilation), (output_suffix "bc")],
                 (switch_on "g"), (append_cmd "-g"),
+                (switch_on "w"), (append_cmd "-w"),
                 (switch_on "O1"), (append_cmd ""),
                 (switch_on "O2"), (append_cmd ""),
                 (switch_on "O3"), (append_cmd ""),
@@ -83,9 +93,22 @@ class clang_based<string language, string cmd, string ext_E> : Tool<
                 (not_empty "I"), (forward "I"),
                 (switch_on "O0"), (append_cmd "-O0"),
                 (default), (append_cmd "-O1")))
+// (sink)
 ]>;
 
-def clang_cc : clang_based<"c", "$CALL(GetBinDir)clang -cc1                                                    -I $CALL(GetStdHeadersDir) -triple=pic16-                                       -emit-llvm-bc ", "i">;
+def clang_cc : clang_based<"c", "$CALL(GetBinDir)clang -cc1                                                        -I $CALL(GetStdHeadersDir)                                                      -D $CALL(GetLowerCasePartDefine)                                                -D $CALL(GetUpperCasePartDefine) -triple=pic16-                                 -emit-llvm-bc ", "i">;
+
+//def clang_cc : Tool<[
+// (in_language "c"),
+// (out_language "llvm-bitcode"),
+// (output_suffix "bc"),
+// (cmd_line "$CALL(GetBinDir)clang-cc -I $CALL(GetStdHeadersDir) -triple=pic16- -emit-llvm-bc "),
+// (cmd_line kkkkk
+// (actions (case
+//          (switch_on "g"), (append_cmd "g"),
+//          (not_empty "I"), (forward "I"))),
+// (sink)
+//]>;
 
 
 // pre-link-and-lto step.
@@ -93,12 +116,12 @@ def llvm_ld : Tool<[
  (in_language "llvm-bitcode"),
  (out_language "llvm-bitcode"),
  (output_suffix "bc"),
- (cmd_line "$CALL(GetBinDir)llvm-ld -L $CALL(GetStdLibsDir) -instcombine -disable-licm-promotion $INFILE -b $OUTFILE -l std"),
+ (cmd_line "$CALL(GetBinDir)llvm-ld -L $CALL(GetStdLibsDir) -disable-gvn -disable-licm-promotion -disable-mem2reg $INFILE -b $OUTFILE -l std"),
  (actions (case
           (switch_on "O0"), (append_cmd "-disable-opt"),
           (switch_on "O1"), (append_cmd "-disable-opt"),
-          (switch_on "O2"), (append_cmd ""), 
 // Whenever O3 is not specified on the command line, default i.e. disable-inlining will always be added.
+          (switch_on "O2"), (append_cmd ""), 
           (switch_on "O3"), (append_cmd ""),
           (default), (append_cmd "-disable-inlining"))),
  (join)
@@ -109,9 +132,16 @@ def llvm_ld_optimizer : Tool<[
  (in_language "llvm-bitcode"),
  (out_language "llvm-bitcode"),
  (output_suffix "bc"),
- (cmd_line "$CALL(GetBinDir)llvm-ld -instcombine -disable-inlining                   $INFILE -b $OUTFILE"),
+// FIXME: We are still not disabling licm-promotion:
+// adding -disable-licm-promotion while building the stdn library causes c16-71 to fail.
+ (cmd_line "$CALL(GetBinDir)llvm-ld -disable-gvn -disable-mem2reg                              $INFILE -b $OUTFILE"),
  (actions (case
-          (switch_on "O0"), (append_cmd "-disable-opt")))
+          (switch_on "O0"), (append_cmd "-disable-opt"),
+          (switch_on "O1"), (append_cmd "-disable-opt"),
+// Whenever O3 is not specified on the command line, default i.e. disable-inlining will always be added.
+          (switch_on "O2"), (append_cmd ""), 
+          (switch_on "O3"), (append_cmd ""),
+          (default), (append_cmd "-disable-inlining")))
 ]>;
 
 // optimizer step.
@@ -119,7 +149,7 @@ def pic16passes : Tool<[
  (in_language "llvm-bitcode"),
  (out_language "llvm-bitcode"),
  (output_suffix "obc"),
- (cmd_line "$CALL(GetBinDir)opt -pic16overlay $INFILE -f -o $OUTFILE"),
+ (cmd_line "$CALL(GetBinDir)opt -pic16cg -pic16overlay $INFILE -f -o $OUTFILE"),
  (actions (case
           (switch_on "O0"), (append_cmd "-disable-opt")))
 ]>;
@@ -131,19 +161,20 @@ def llc : Tool<[
  (cmd_line "$CALL(GetBinDir)llc -march=pic16 -disable-jump-tables -pre-RA-sched=list-burr -f $INFILE -o $OUTFILE"),
  (actions (case
           (switch_on "S"), (stop_compilation),
+//          (not_empty "Wllc,"), (unpack_values "Wllc,"),
+//         (not_empty "pre-RA-sched"), (forward "pre-RA-sched")))
          (not_empty "regalloc"), (forward "regalloc"),
-         (empty "regalloc"), (append_cmd "-regalloc=pbqp")))
+         (empty "regalloc"), (append_cmd "-regalloc=linearscan")))
 ]>;
 
 def gpasm : Tool<[
  (in_language "assembler"),
  (out_language "object-code"),
  (output_suffix "o"),
- (cmd_line "$CALL(GetBinDir)gpasm -r decimal -I $CALL(GetStdAsmHeadersDir) -C -c -w 2 $INFILE -o $OUTFILE"),
+ (cmd_line "$CALL(GetBinDir)gpasm -z -r decimal -I $CALL(GetStdAsmHeadersDir) -C -c -w 2 $INFILE -o $OUTFILE"),
  (actions (case
           (switch_on "c"), (stop_compilation),
           (switch_on "g"), (append_cmd "-g"),
-          (switch_on "r"), (append_cmd "-z"),
           (not_empty "p"), (forward "p"),
           (empty "p"), (append_cmd "-p 16f1xxx"),
           (not_empty "Wa,"), (forward_value "Wa,")))
@@ -153,18 +184,18 @@ def mplink : Tool<[
  (in_language "object-code"),
  (out_language "executable"),
  (output_suffix "cof"),
- (cmd_line "$CALL(GetBinDir)mplink -k $CALL(GetStdLinkerScriptsDir) -l $CALL(GetStdLibsDir) intrinsics.lib stdn.lib $INFILE -o $OUTFILE"),
+ (cmd_line "$CALL(GetBinDir)mplink -e -k $CALL(GetStdLinkerScriptsDir) -l $CALL(GetStdLibsDir) intrinsics.lib stdn.lib $INFILE -o $OUTFILE"),
  (actions (case
           (not_empty "Wl,"), (forward_value "Wl,"),
-          (switch_on "r"), (append_cmd "-e"),
           (switch_on "X"), (append_cmd "-x"),
           (not_empty "L"), (forward_as "L", "-l"),
           (not_empty "K"), (forward_as "K", "-k"),
           (not_empty "m"), (forward "m"),
           (not_empty "p"), [(forward "p"), (append_cmd "-c")],
           (empty "p"), (append_cmd "-p 16f1xxx -c"),
-          (not_empty "k"), (forward_value "k"),
-          (not_empty "l"), (forward_value "l"))),
+//          (not_empty "l"), [(unpack_values "l"),(append_cmd ".lib")])),
+          (not_empty "k"), (forward "k"),
+          (not_empty "l"), (forward "l"))),
  (join)
 ]>;
 
diff --git a/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PluginMain.cpp b/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PluginMain.cpp
index a6d2ff6..9b2f9fc 100644
--- a/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PluginMain.cpp
+++ b/libclamav/c++/llvm/tools/llvmc/example/mcc16/plugins/PIC16Base/PluginMain.cpp
@@ -9,6 +9,8 @@ namespace llvmc {
   extern char *ProgramName;
 }
 
+  
+
 // Returns the platform specific directory separator via #ifdefs.
 // FIXME: This currently works on Linux and Windows only. It does not 
 // work on other unices. 
@@ -21,6 +23,43 @@ static std::string GetDirSeparator() {
 }
 
 namespace hooks {
+// Get preprocessor define for the part.
+// It is __partname format in lower case.
+std::string
+GetLowerCasePartDefine(void) {
+  std::string Partname;
+  if (AutoGeneratedParameter_p.empty()) {
+    Partname = "16f1xxx";
+  } else {
+    Partname = AutoGeneratedParameter_p;
+  }
+
+  std::string LowerCase;
+  for (unsigned i = 0; i < Partname.size(); i++) {
+    LowerCase.push_back(std::tolower(Partname[i]));
+  }
+
+  return "__" + LowerCase;
+}
+
+std::string
+GetUpperCasePartDefine(void) {
+  std::string Partname;
+  if (AutoGeneratedParameter_p.empty()) {
+    Partname = "16f1xxx";
+  } else {
+    Partname = AutoGeneratedParameter_p;
+  }
+
+  std::string UpperCase;
+  for (unsigned i = 0; i < Partname.size(); i++) {
+    UpperCase.push_back(std::toupper(Partname[i]));
+  }
+
+  return "__" +  UpperCase;
+}
+
+
 // Get the dir where c16 executables reside.
 std::string GetBinDir() {
   // Construct a Path object from the program name.  
diff --git a/libclamav/c++/llvm/tools/llvmc/plugins/Base/Base.td.in b/libclamav/c++/llvm/tools/llvmc/plugins/Base/Base.td.in
index cf0ff68..1acd969 100644
--- a/libclamav/c++/llvm/tools/llvmc/plugins/Base/Base.td.in
+++ b/libclamav/c++/llvm/tools/llvmc/plugins/Base/Base.td.in
@@ -50,10 +50,18 @@ def OptList : OptionList<[
     (help "Choose linker (possible values: gcc, g++)")),
  (parameter_option "mtune",
     (help "Target a specific CPU type"), (hidden)),
+
+ // TODO: Add a conditional compilation mechanism to make Darwin-only options
+ // like '-arch' really Darwin-only.
+
+ (parameter_option "arch",
+    (help "Compile for the specified target architecture"), (hidden)),
  (parameter_option "march",
     (help "A synonym for -mtune"), (hidden)),
  (parameter_option "mcpu",
     (help "A deprecated synonym for -mtune"), (hidden)),
+ (switch_option "mfix-and-continue",
+    (help "Needed by gdb to load .o files dynamically"), (hidden)),
  (parameter_option "MF",
     (help "Specify a file to write dependencies to"), (hidden)),
  (parameter_list_option "MT",
@@ -61,6 +69,9 @@ def OptList : OptionList<[
     (hidden)),
  (parameter_list_option "include",
     (help "Include the named file prior to preprocessing")),
+ (parameter_list_option "iquote",
+    (help "Search dir only for files requested with #include \"file\""),
+    (hidden)),
  (parameter_list_option "framework",
     (help "Specifies a framework to link against")),
  (parameter_list_option "weak_framework",
@@ -85,7 +96,19 @@ def OptList : OptionList<[
     (help "Pass options to opt")),
  (prefix_list_option "m",
      (help "Enable or disable various extensions (-mmmx, -msse, etc.)"),
-     (hidden))
+     (hidden)),
+ (switch_option "dynamiclib", (hidden),
+     (help "Produce a dynamic library")),
+ (switch_option "prebind", (hidden),
+     (help "Prebind all undefined symbols")),
+ (switch_option "dead_strip", (hidden),
+     (help "Remove unreachable blocks of code")),
+ (switch_option "single_module", (hidden),
+     (help "Build the library so it contains only one module")),
+ (parameter_option "compatibility_version", (hidden),
+     (help "Compatibility version number")),
+ (parameter_option "current_version", (hidden),
+     (help "Current version number"))
 ]>;
 
 // Option preprocessor.
@@ -129,14 +152,17 @@ class llvm_gcc_based <string cmd_prefix, string in_lang, string E_ext> : Tool<
          (switch_on ["emit-llvm", "c"]), (stop_compilation),
          (switch_on "fsyntax-only"), (stop_compilation),
          (not_empty "include"), (forward "include"),
+         (not_empty "iquote"), (forward "iquote"),
          (not_empty "save-temps"), (append_cmd "-save-temps"),
          (not_empty "I"), (forward "I"),
          (not_empty "F"), (forward "F"),
          (not_empty "D"), (forward "D"),
+         (not_empty "arch"), (forward "arch"),
          (not_empty "march"), (forward "march"),
          (not_empty "mtune"), (forward "mtune"),
          (not_empty "mcpu"), (forward "mcpu"),
          (not_empty "m"), (forward "m"),
+         (switch_on "mfix-and-continue"), (forward "mfix-and-continue"),
          (switch_on "m32"), (forward "m32"),
          (switch_on "m64"), (forward "m64"),
          (switch_on "O0"), (forward "O0"),
@@ -183,6 +209,7 @@ def llvm_gcc_assembler : Tool<
  (cmd_line "@LLVMGCCCOMMAND@ -c -x assembler $INFILE -o $OUTFILE"),
  (actions (case
           (switch_on "c"), (stop_compilation),
+          (not_empty "arch"), (forward "arch"),
           (not_empty "Wa,"), (forward_value "Wa,")))
 ]>;
 
@@ -218,12 +245,21 @@ class llvm_gcc_based_linker <string cmd_prefix> : Tool<
           (switch_on "pthread"), (append_cmd "-lpthread"),
           (not_empty "L"), (forward "L"),
           (not_empty "F"), (forward "F"),
+          (not_empty "arch"), (forward "arch"),
           (not_empty "framework"), (forward "framework"),
           (not_empty "weak_framework"), (forward "weak_framework"),
           (switch_on "m32"), (forward "m32"),
           (switch_on "m64"), (forward "m64"),
           (not_empty "l"), (forward "l"),
-          (not_empty "Wl,"), (forward "Wl,")))
+          (not_empty "Wl,"), (forward "Wl,"),
+          (switch_on "dynamiclib"), (forward "dynamiclib"),
+          (switch_on "prebind"), (forward "prebind"),
+          (switch_on "dead_strip"), (forward "dead_strip"),
+          (switch_on "single_module"), (forward "single_module"),
+          (not_empty "compatibility_version"),
+                     (forward "compatibility_version"),
+          (not_empty "current_version"),
+                     (forward "current_version")))
 ]>;
 
 // Default linker
diff --git a/libclamav/c++/llvm/unittests/ADT/APFloatTest.cpp b/libclamav/c++/llvm/unittests/ADT/APFloatTest.cpp
index 76cdafc..b02cc3e 100644
--- a/libclamav/c++/llvm/unittests/ADT/APFloatTest.cpp
+++ b/libclamav/c++/llvm/unittests/ADT/APFloatTest.cpp
@@ -333,6 +333,8 @@ TEST(APFloatTest, toString) {
   ASSERT_EQ("1.01E-2", convertToString(1.01E-2, 5, 1));
   ASSERT_EQ("0.7853981633974483", convertToString(0.78539816339744830961, 0, 3));
   ASSERT_EQ("4.940656458412465E-324", convertToString(4.9406564584124654e-324, 0, 3));
+  ASSERT_EQ("873.1834", convertToString(873.1834, 0, 1));
+  ASSERT_EQ("8.731834E+2", convertToString(873.1834, 0, 0));
 }
 
 #ifdef GTEST_HAS_DEATH_TEST
diff --git a/libclamav/c++/llvm/unittests/ADT/BitVectorTest.cpp b/libclamav/c++/llvm/unittests/ADT/BitVectorTest.cpp
index 5348281..4fe11c1 100644
--- a/libclamav/c++/llvm/unittests/ADT/BitVectorTest.cpp
+++ b/libclamav/c++/llvm/unittests/ADT/BitVectorTest.cpp
@@ -7,6 +7,7 @@
 //
 //===----------------------------------------------------------------------===//
 
+#ifndef XFAIL
 #include "llvm/ADT/BitVector.h"
 #include "gtest/gtest.h"
 
@@ -137,4 +138,45 @@ TEST(BitVectorTest, TrivialOperation) {
   EXPECT_TRUE(Vec.empty());
 }
 
+TEST(BitVectorTest, CompoundAssignment) {
+  BitVector A;
+  A.resize(10);
+  A.set(4);
+  A.set(7);
+
+  BitVector B;
+  B.resize(50);
+  B.set(5);
+  B.set(18);
+
+  A |= B;
+  EXPECT_TRUE(A.test(4));
+  EXPECT_TRUE(A.test(5));
+  EXPECT_TRUE(A.test(7));
+  EXPECT_TRUE(A.test(18));
+  EXPECT_EQ(4U, A.count());
+  EXPECT_EQ(50U, A.size());
+
+  B.resize(10);
+  B.set();
+  B.reset(2);
+  B.reset(7);
+  A &= B;
+  EXPECT_FALSE(A.test(2));
+  EXPECT_FALSE(A.test(7));
+  EXPECT_EQ(2U, A.count());
+  EXPECT_EQ(50U, A.size());
+
+  B.resize(100);
+  B.set();
+
+  A ^= B;
+  EXPECT_TRUE(A.test(2));
+  EXPECT_TRUE(A.test(7));
+  EXPECT_EQ(98U, A.count());
+  EXPECT_EQ(100U, A.size());
 }
+
+}
+
+#endif
diff --git a/libclamav/c++/llvm/unittests/ADT/Makefile b/libclamav/c++/llvm/unittests/ADT/Makefile
index c56b951..fe08328 100644
--- a/libclamav/c++/llvm/unittests/ADT/Makefile
+++ b/libclamav/c++/llvm/unittests/ADT/Makefile
@@ -12,4 +12,12 @@ TESTNAME = ADT
 LINK_COMPONENTS := core support
 
 include $(LEVEL)/Makefile.config
+
+# Xfail BitVectorTest for now on PPC Darwin.  7598360.
+ifeq ($(ARCH),PowerPC)
+ifeq ($(TARGET_OS),Darwin)
+CPP.Flags += -DXFAIL
+endif
+endif
+
 include $(LLVM_SRC_ROOT)/unittests/Makefile.unittest
diff --git a/libclamav/c++/llvm/unittests/ADT/SmallBitVectorTest.cpp b/libclamav/c++/llvm/unittests/ADT/SmallBitVectorTest.cpp
index a5c60de..a2cc652 100644
--- a/libclamav/c++/llvm/unittests/ADT/SmallBitVectorTest.cpp
+++ b/libclamav/c++/llvm/unittests/ADT/SmallBitVectorTest.cpp
@@ -137,4 +137,43 @@ TEST(SmallBitVectorTest, TrivialOperation) {
   EXPECT_TRUE(Vec.empty());
 }
 
+TEST(SmallBitVectorTest, CompoundAssignment) {
+  SmallBitVector A;
+  A.resize(10);
+  A.set(4);
+  A.set(7);
+
+  SmallBitVector B;
+  B.resize(50);
+  B.set(5);
+  B.set(18);
+
+  A |= B;
+  EXPECT_TRUE(A.test(4));
+  EXPECT_TRUE(A.test(5));
+  EXPECT_TRUE(A.test(7));
+  EXPECT_TRUE(A.test(18));
+  EXPECT_EQ(4U, A.count());
+  EXPECT_EQ(50U, A.size());
+
+  B.resize(10);
+  B.set();
+  B.reset(2);
+  B.reset(7);
+  A &= B;
+  EXPECT_FALSE(A.test(2));
+  EXPECT_FALSE(A.test(7));
+  EXPECT_EQ(2U, A.count());
+  EXPECT_EQ(50U, A.size());
+
+  B.resize(100);
+  B.set();
+
+  A ^= B;
+  EXPECT_TRUE(A.test(2));
+  EXPECT_TRUE(A.test(7));
+  EXPECT_EQ(98U, A.count());
+  EXPECT_EQ(100U, A.size());
+}
+
 }
diff --git a/libclamav/c++/llvm/unittests/ADT/StringMapTest.cpp b/libclamav/c++/llvm/unittests/ADT/StringMapTest.cpp
index 3dcdc39..413f068 100644
--- a/libclamav/c++/llvm/unittests/ADT/StringMapTest.cpp
+++ b/libclamav/c++/llvm/unittests/ADT/StringMapTest.cpp
@@ -191,6 +191,7 @@ TEST_F(StringMapTest, StringMapEntryTest) {
           testKeyFirst, testKeyFirst + testKeyLength, 1u);
   EXPECT_STREQ(testKey, entry->first());
   EXPECT_EQ(1u, entry->second);
+  free(entry);
 }
 
 // Test insert() method.
diff --git a/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITEventListenerTest.cpp b/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITEventListenerTest.cpp
index c3bb858..a36ec3b 100644
--- a/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITEventListenerTest.cpp
+++ b/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITEventListenerTest.cpp
@@ -12,7 +12,6 @@
 #include "llvm/LLVMContext.h"
 #include "llvm/Instructions.h"
 #include "llvm/Module.h"
-#include "llvm/ModuleProvider.h"
 #include "llvm/ADT/OwningPtr.h"
 #include "llvm/CodeGen/MachineCodeInfo.h"
 #include "llvm/ExecutionEngine/JIT.h"
diff --git a/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITTest.cpp b/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITTest.cpp
index 56abb1b..53f5b38 100644
--- a/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITTest.cpp
+++ b/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/JITTest.cpp
@@ -23,7 +23,6 @@
 #include "llvm/GlobalVariable.h"
 #include "llvm/LLVMContext.h"
 #include "llvm/Module.h"
-#include "llvm/ModuleProvider.h"
 #include "llvm/Support/IRBuilder.h"
 #include "llvm/Support/MemoryBuffer.h"
 #include "llvm/Support/SourceMgr.h"
@@ -194,11 +193,10 @@ class JITTest : public testing::Test {
  protected:
   virtual void SetUp() {
     M = new Module("<main>", Context);
-    MP = new ExistingModuleProvider(M);
     RJMM = new RecordingJITMemoryManager;
     RJMM->setPoisonMemory(true);
     std::string Error;
-    TheJIT.reset(EngineBuilder(MP).setEngineKind(EngineKind::JIT)
+    TheJIT.reset(EngineBuilder(M).setEngineKind(EngineKind::JIT)
                  .setJITMemoryManager(RJMM)
                  .setErrorStr(&Error).create());
     ASSERT_TRUE(TheJIT.get() != NULL) << Error;
@@ -209,8 +207,7 @@ class JITTest : public testing::Test {
   }
 
   LLVMContext Context;
-  Module *M;  // Owned by MP.
-  ModuleProvider *MP;  // Owned by ExecutionEngine.
+  Module *M;  // Owned by ExecutionEngine.
   RecordingJITMemoryManager *RJMM;
   OwningPtr<ExecutionEngine> TheJIT;
 };
@@ -223,14 +220,13 @@ class JITTest : public testing::Test {
 TEST(JIT, GlobalInFunction) {
   LLVMContext context;
   Module *M = new Module("<main>", context);
-  ExistingModuleProvider *MP = new ExistingModuleProvider(M);
 
   JITMemoryManager *MemMgr = JITMemoryManager::CreateDefaultMemManager();
   // Tell the memory manager to poison freed memory so that accessing freed
   // memory is more easily tested.
   MemMgr->setPoisonMemory(true);
   std::string Error;
-  OwningPtr<ExecutionEngine> JIT(EngineBuilder(MP)
+  OwningPtr<ExecutionEngine> JIT(EngineBuilder(M)
                                  .setEngineKind(EngineKind::JIT)
                                  .setErrorStr(&Error)
                                  .setJITMemoryManager(MemMgr)
@@ -428,7 +424,8 @@ TEST_F(JITTest, ModuleDeletion) {
                "} ");
   Function *func = M->getFunction("main");
   TheJIT->getPointerToFunction(func);
-  TheJIT->deleteModuleProvider(MP);
+  TheJIT->removeModule(M);
+  delete M;
 
   SmallPtrSet<const void*, 2> FunctionsDeallocated;
   for (unsigned i = 0, e = RJMM->deallocateFunctionBodyCalls.size();
@@ -649,36 +646,70 @@ std::string AssembleToBitcode(LLVMContext &Context, const char *Assembly) {
 }
 
 // Returns a newly-created ExecutionEngine that reads the bitcode in 'Bitcode'
-// lazily.  The associated ModuleProvider (owned by the ExecutionEngine) is
-// returned in MP.  Both will be NULL on an error.  Bitcode must live at least
-// as long as the ExecutionEngine.
+// lazily.  The associated Module (owned by the ExecutionEngine) is returned in
+// M.  Both will be NULL on an error.  Bitcode must live at least as long as the
+// ExecutionEngine.
 ExecutionEngine *getJITFromBitcode(
-  LLVMContext &Context, const std::string &Bitcode, ModuleProvider *&MP) {
+  LLVMContext &Context, const std::string &Bitcode, Module *&M) {
   // c_str() is null-terminated like MemoryBuffer::getMemBuffer requires.
   MemoryBuffer *BitcodeBuffer =
     MemoryBuffer::getMemBuffer(Bitcode.c_str(),
                                Bitcode.c_str() + Bitcode.size(),
                                "Bitcode for test");
   std::string errMsg;
-  MP = getBitcodeModuleProvider(BitcodeBuffer, Context, &errMsg);
-  if (MP == NULL) {
+  M = getLazyBitcodeModule(BitcodeBuffer, Context, &errMsg);
+  if (M == NULL) {
     ADD_FAILURE() << errMsg;
     delete BitcodeBuffer;
     return NULL;
   }
-  ExecutionEngine *TheJIT = EngineBuilder(MP)
+  ExecutionEngine *TheJIT = EngineBuilder(M)
     .setEngineKind(EngineKind::JIT)
     .setErrorStr(&errMsg)
     .create();
   if (TheJIT == NULL) {
     ADD_FAILURE() << errMsg;
-    delete MP;
-    MP = NULL;
+    delete M;
+    M = NULL;
     return NULL;
   }
   return TheJIT;
 }
 
+TEST(LazyLoadedJITTest, MaterializableAvailableExternallyFunctionIsntCompiled) {
+  LLVMContext Context;
+  const std::string Bitcode =
+    AssembleToBitcode(Context,
+                      "define available_externally i32 "
+                      "    @JITTest_AvailableExternallyFunction() { "
+                      "  ret i32 7 "
+                      "} "
+                      " "
+                      "define i32 @func() { "
+                      "  %result = tail call i32 "
+                      "    @JITTest_AvailableExternallyFunction() "
+                      "  ret i32 %result "
+                      "} ");
+  ASSERT_FALSE(Bitcode.empty()) << "Assembling failed";
+  Module *M;
+  OwningPtr<ExecutionEngine> TheJIT(getJITFromBitcode(Context, Bitcode, M));
+  ASSERT_TRUE(TheJIT.get()) << "Failed to create JIT.";
+  TheJIT->DisableLazyCompilation(true);
+
+  Function *funcIR = M->getFunction("func");
+  Function *availableFunctionIR =
+    M->getFunction("JITTest_AvailableExternallyFunction");
+
+  // Double-check that the available_externally function is still unmaterialized
+  // when getPointerToFunction needs to find out if it's available_externally.
+  EXPECT_TRUE(availableFunctionIR->isMaterializable());
+
+  int32_t (*func)() = reinterpret_cast<int32_t(*)()>(
+    (intptr_t)TheJIT->getPointerToFunction(funcIR));
+  EXPECT_EQ(42, func()) << "func should return 42 from the static version,"
+                        << " not 7 from the IR version.";
+}
+
 TEST(LazyLoadedJITTest, EagerCompiledRecursionThroughGhost) {
   LLVMContext Context;
   const std::string Bitcode =
@@ -699,16 +730,15 @@ TEST(LazyLoadedJITTest, EagerCompiledRecursionThroughGhost) {
                       "  ret i32 %result "
                       "} ");
   ASSERT_FALSE(Bitcode.empty()) << "Assembling failed";
-  ModuleProvider *MP;
-  OwningPtr<ExecutionEngine> TheJIT(getJITFromBitcode(Context, Bitcode, MP));
+  Module *M;
+  OwningPtr<ExecutionEngine> TheJIT(getJITFromBitcode(Context, Bitcode, M));
   ASSERT_TRUE(TheJIT.get()) << "Failed to create JIT.";
   TheJIT->DisableLazyCompilation(true);
 
-  Module *M = MP->getModule();
   Function *recur1IR = M->getFunction("recur1");
   Function *recur2IR = M->getFunction("recur2");
-  EXPECT_TRUE(recur1IR->hasNotBeenReadFromBitcode());
-  EXPECT_TRUE(recur2IR->hasNotBeenReadFromBitcode());
+  EXPECT_TRUE(recur1IR->isMaterializable());
+  EXPECT_TRUE(recur2IR->isMaterializable());
 
   int32_t (*recur1)(int32_t) = reinterpret_cast<int32_t(*)(int32_t)>(
     (intptr_t)TheJIT->getPointerToFunction(recur1IR));
diff --git a/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/MultiJITTest.cpp b/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/MultiJITTest.cpp
new file mode 100644
index 0000000..8997d39
--- /dev/null
+++ b/libclamav/c++/llvm/unittests/ExecutionEngine/JIT/MultiJITTest.cpp
@@ -0,0 +1,164 @@
+//===- MultiJITTest.cpp - Unit tests for instantiating multiple JITs ------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#include "gtest/gtest.h"
+#include "llvm/LLVMContext.h"
+#include "llvm/Module.h"
+#include "llvm/Assembly/Parser.h"
+#include "llvm/ExecutionEngine/GenericValue.h"
+#include "llvm/ExecutionEngine/JIT.h"
+#include "llvm/Support/SourceMgr.h"
+#include <vector>
+
+using namespace llvm;
+
+namespace {
+
+bool LoadAssemblyInto(Module *M, const char *assembly) {
+  SMDiagnostic Error;
+  bool success =
+    NULL != ParseAssemblyString(assembly, M, Error, M->getContext());
+  std::string errMsg;
+  raw_string_ostream os(errMsg);
+  Error.Print("", os);
+  EXPECT_TRUE(success) << os.str();
+  return success;
+}
+
+void createModule1(LLVMContext &Context1, Module *&M1, Function *&FooF1) {
+  M1 = new Module("test1", Context1);
+  LoadAssemblyInto(M1,
+                   "define i32 @add1(i32 %ArgX1) { "
+                   "entry: "
+                   "  %addresult = add i32 1, %ArgX1 "
+                   "  ret i32 %addresult "
+                   "} "
+                   " "
+                   "define i32 @foo1() { "
+                   "entry: "
+                   "  %add1 = call i32 @add1(i32 10) "
+                   "  ret i32 %add1 "
+                   "} ");
+  FooF1 = M1->getFunction("foo1");
+}
+
+void createModule2(LLVMContext &Context2, Module *&M2, Function *&FooF2) {
+  M2 = new Module("test2", Context2);
+  LoadAssemblyInto(M2,
+                   "define i32 @add2(i32 %ArgX2) { "
+                   "entry: "
+                   "  %addresult = add i32 2, %ArgX2 "
+                   "  ret i32 %addresult "
+                   "} "
+                   " "
+                   "define i32 @foo2() { "
+                   "entry: "
+                   "  %add2 = call i32 @add2(i32 10) "
+                   "  ret i32 %add2 "
+                   "} ");
+  FooF2 = M2->getFunction("foo2");
+}
+
+TEST(MultiJitTest, EagerMode) {
+  LLVMContext Context1;
+  Module *M1 = 0;
+  Function *FooF1 = 0;
+  createModule1(Context1, M1, FooF1);
+
+  LLVMContext Context2;
+  Module *M2 = 0;
+  Function *FooF2 = 0;
+  createModule2(Context2, M2, FooF2);
+
+  // Now we create the JIT in eager mode
+  OwningPtr<ExecutionEngine> EE1(EngineBuilder(M1).create());
+  EE1->DisableLazyCompilation(true);
+  OwningPtr<ExecutionEngine> EE2(EngineBuilder(M2).create());
+  EE2->DisableLazyCompilation(true);
+
+  // Call the `foo' function with no arguments:
+  std::vector<GenericValue> noargs;
+  GenericValue gv1 = EE1->runFunction(FooF1, noargs);
+  GenericValue gv2 = EE2->runFunction(FooF2, noargs);
+
+  // Import result of execution:
+  EXPECT_EQ(gv1.IntVal, 11);
+  EXPECT_EQ(gv2.IntVal, 12);
+
+  EE1->freeMachineCodeForFunction(FooF1);
+  EE2->freeMachineCodeForFunction(FooF2);
+}
+
+TEST(MultiJitTest, LazyMode) {
+  LLVMContext Context1;
+  Module *M1 = 0;
+  Function *FooF1 = 0;
+  createModule1(Context1, M1, FooF1);
+
+  LLVMContext Context2;
+  Module *M2 = 0;
+  Function *FooF2 = 0;
+  createModule2(Context2, M2, FooF2);
+
+  // Now we create the JIT in lazy mode
+  OwningPtr<ExecutionEngine> EE1(EngineBuilder(M1).create());
+  EE1->DisableLazyCompilation(false);
+  OwningPtr<ExecutionEngine> EE2(EngineBuilder(M2).create());
+  EE2->DisableLazyCompilation(false);
+
+  // Call the `foo' function with no arguments:
+  std::vector<GenericValue> noargs;
+  GenericValue gv1 = EE1->runFunction(FooF1, noargs);
+  GenericValue gv2 = EE2->runFunction(FooF2, noargs);
+
+  // Import result of execution:
+  EXPECT_EQ(gv1.IntVal, 11);
+  EXPECT_EQ(gv2.IntVal, 12);
+
+  EE1->freeMachineCodeForFunction(FooF1);
+  EE2->freeMachineCodeForFunction(FooF2);
+}
+
+extern "C" {
+  extern void *getPointerToNamedFunction(const char *Name);
+}
+
+TEST(MultiJitTest, JitPool) {
+  LLVMContext Context1;
+  Module *M1 = 0;
+  Function *FooF1 = 0;
+  createModule1(Context1, M1, FooF1);
+
+  LLVMContext Context2;
+  Module *M2 = 0;
+  Function *FooF2 = 0;
+  createModule2(Context2, M2, FooF2);
+
+  // Now we create two JITs
+  OwningPtr<ExecutionEngine> EE1(EngineBuilder(M1).create());
+  OwningPtr<ExecutionEngine> EE2(EngineBuilder(M2).create());
+
+  Function *F1 = EE1->FindFunctionNamed("foo1");
+  void *foo1 = EE1->getPointerToFunction(F1);
+
+  Function *F2 = EE2->FindFunctionNamed("foo2");
+  void *foo2 = EE2->getPointerToFunction(F2);
+
+  // Function in M1
+  EXPECT_EQ(getPointerToNamedFunction("foo1"), foo1);
+
+  // Function in M2
+  EXPECT_EQ(getPointerToNamedFunction("foo2"), foo2);
+
+  // Symbol search
+  EXPECT_EQ((intptr_t)getPointerToNamedFunction("getPointerToNamedFunction"),
+            (intptr_t)&getPointerToNamedFunction);
+}
+
+}  // anonymous namespace
diff --git a/libclamav/c++/llvm/unittests/Makefile.unittest b/libclamav/c++/llvm/unittests/Makefile.unittest
index 65638b2..6fbef54 100644
--- a/libclamav/c++/llvm/unittests/Makefile.unittest
+++ b/libclamav/c++/llvm/unittests/Makefile.unittest
@@ -14,11 +14,11 @@
 # Set up variables for building a unit test.
 ifdef TESTNAME
 
-CXXFLAGS += -DGTEST_HAS_RTTI=0
+CPP.Flags += -DGTEST_HAS_RTTI=0
 # gcc's TR1 <tuple> header depends on RTTI, so force googletest to use
 # its own tuple implementation.  When we import googletest >=1.4.0, we
 # can drop this line.
-CXXFLAGS += -DGTEST_HAS_TR1_TUPLE=0
+CPP.Flags += -DGTEST_HAS_TR1_TUPLE=0
 
 include $(LEVEL)/Makefile.common
 
diff --git a/libclamav/c++/llvm/unittests/Support/TypeBuilderTest.cpp b/libclamav/c++/llvm/unittests/Support/TypeBuilderTest.cpp
index a5c5e67..e805827 100644
--- a/libclamav/c++/llvm/unittests/Support/TypeBuilderTest.cpp
+++ b/libclamav/c++/llvm/unittests/Support/TypeBuilderTest.cpp
@@ -19,9 +19,16 @@ namespace {
 TEST(TypeBuilderTest, Void) {
   EXPECT_EQ(Type::getVoidTy(getGlobalContext()), (TypeBuilder<void, true>::get(getGlobalContext())));
   EXPECT_EQ(Type::getVoidTy(getGlobalContext()), (TypeBuilder<void, false>::get(getGlobalContext())));
-  // Special case for C compatibility:
+  // Special cases for C compatibility:
   EXPECT_EQ(Type::getInt8PtrTy(getGlobalContext()),
             (TypeBuilder<void*, false>::get(getGlobalContext())));
+  EXPECT_EQ(Type::getInt8PtrTy(getGlobalContext()),
+            (TypeBuilder<const void*, false>::get(getGlobalContext())));
+  EXPECT_EQ(Type::getInt8PtrTy(getGlobalContext()),
+            (TypeBuilder<volatile void*, false>::get(getGlobalContext())));
+  EXPECT_EQ(Type::getInt8PtrTy(getGlobalContext()),
+            (TypeBuilder<const volatile void*, false>::get(
+              getGlobalContext())));
 }
 
 TEST(TypeBuilderTest, HostIntegers) {
diff --git a/libclamav/c++/llvm/unittests/VMCore/DerivedTypesTest.cpp b/libclamav/c++/llvm/unittests/VMCore/DerivedTypesTest.cpp
index 11b4dff..2e0450d 100644
--- a/libclamav/c++/llvm/unittests/VMCore/DerivedTypesTest.cpp
+++ b/libclamav/c++/llvm/unittests/VMCore/DerivedTypesTest.cpp
@@ -18,14 +18,16 @@ namespace {
 
 TEST(OpaqueTypeTest, RegisterWithContext) {
   LLVMContext C;
-  LLVMContextImpl *pImpl = C.pImpl;  
+  LLVMContextImpl *pImpl = C.pImpl;
 
-  EXPECT_EQ(0u, pImpl->OpaqueTypes.size());
+  // 1 refers to the AlwaysOpaqueTy allocated in the Context's constructor and
+  // destroyed in the destructor.
+  EXPECT_EQ(1u, pImpl->OpaqueTypes.size());
   {
     PATypeHolder Type = OpaqueType::get(C);
-    EXPECT_EQ(1u, pImpl->OpaqueTypes.size());
+    EXPECT_EQ(2u, pImpl->OpaqueTypes.size());
   }
-  EXPECT_EQ(0u, pImpl->OpaqueTypes.size());
+  EXPECT_EQ(1u, pImpl->OpaqueTypes.size());
 }
 
 }  // namespace
diff --git a/libclamav/c++/llvm/utils/FileCheck/FileCheck.cpp b/libclamav/c++/llvm/utils/FileCheck/FileCheck.cpp
index 078028a..3c4742c 100644
--- a/libclamav/c++/llvm/utils/FileCheck/FileCheck.cpp
+++ b/libclamav/c++/llvm/utils/FileCheck/FileCheck.cpp
@@ -340,12 +340,10 @@ unsigned Pattern::ComputeMatchDistance(StringRef Buffer,
   if (ExampleString.empty())
     ExampleString = RegExStr;
 
-  unsigned Distance = 0;
-  for (unsigned i = 0, e = ExampleString.size(); i != e; ++i)
-    if (Buffer.substr(i, 1) != ExampleString.substr(i, 1))
-      ++Distance;
-
-  return Distance;
+  // Only compare up to the first line in the buffer, or the string size.
+  StringRef BufferPrefix = Buffer.substr(0, ExampleString.size());
+  BufferPrefix = BufferPrefix.split('\n').first;
+  return BufferPrefix.edit_distance(ExampleString);
 }
 
 void Pattern::PrintFailureInfo(const SourceMgr &SM, StringRef Buffer,
@@ -383,10 +381,15 @@ void Pattern::PrintFailureInfo(const SourceMgr &SM, StringRef Buffer,
   double BestQuality = 0;
 
   // Use an arbitrary 4k limit on how far we will search.
-  for (size_t i = 0, e = std::min(4096, int(Buffer.size())); i != e; ++i) {
+  for (size_t i = 0, e = std::min(size_t(4096), Buffer.size()); i != e; ++i) {
     if (Buffer[i] == '\n')
       ++NumLinesForward;
 
+    // Patterns have leading whitespace stripped, so skip whitespace when
+    // looking for something which looks like a pattern.
+    if (Buffer[i] == ' ' || Buffer[i] == '\t')
+      continue;
+
     // Compute the "quality" of this match as an arbitrary combination of the
     // match distance and the number of lines skipped to get to this match.
     unsigned Distance = ComputeMatchDistance(Buffer.substr(i), VariableTable);
@@ -529,6 +532,9 @@ static bool ReadCheckFile(SourceMgr &SM,
     // Scan ahead to the end of line.
     size_t EOL = Buffer.find_first_of("\n\r");
 
+    // Remember the location of the start of the pattern, for diagnostics.
+    SMLoc PatternLoc = SMLoc::getFromPointer(Buffer.data());
+
     // Parse the pattern.
     Pattern P;
     if (P.ParsePattern(Buffer.substr(0, EOL), SM))
@@ -555,7 +561,7 @@ static bool ReadCheckFile(SourceMgr &SM,
     
     // Okay, add the string we captured to the output vector and move on.
     CheckStrings.push_back(CheckString(P,
-                                       SMLoc::getFromPointer(Buffer.data()),
+                                       PatternLoc,
                                        IsCheckNext));
     std::swap(NotMatches, CheckStrings.back().NotStrings);
   }
diff --git a/libclamav/c++/llvm/utils/GenLibDeps.pl b/libclamav/c++/llvm/utils/GenLibDeps.pl
index 6d0b13e..b320a91 100755
--- a/libclamav/c++/llvm/utils/GenLibDeps.pl
+++ b/libclamav/c++/llvm/utils/GenLibDeps.pl
@@ -9,10 +9,12 @@
 # Syntax:   GenLibDeps.pl [-flat] <directory_with_libraries_in_it> [path_to_nm_binary]
 #
 use strict;
-
+use warnings;
 # Parse arguments... 
 my $FLAT = 0;
 my $WHY = 0;
+my $PEROBJ = 0;
+my $PEROBJINCL = 0;
 while (scalar(@ARGV) and ($_ = $ARGV[0], /^[-+]/)) {
   shift;
   last if /^--$/;  # Stop processing arguments on --
@@ -20,6 +22,8 @@ while (scalar(@ARGV) and ($_ = $ARGV[0], /^[-+]/)) {
   # List command line options here...
   if (/^-flat$/)     { $FLAT = 1; next; }
   if (/^-why/)       { $WHY = 1; $FLAT = 1; next; }
+  if (/^-perobj$/)    { $PEROBJ = 1; next; }
+  if (/^-perobjincl/) { $PEROBJINCL = 1; next;}
   print "Unknown option: $_ : ignoring!\n";
 }
 
@@ -47,6 +51,19 @@ if (!defined($nmPath) || $nmPath eq "") {
   die "Can't find 'nm'" if (! -x "$nmPath");
 }
 
+my $ranlibPath;
+if ($PEROBJ) {
+  $ranlibPath = $ARGV[2];
+  if (defined($ENV{RANLIB})) {
+    chomp($ranlibPath=$ENV{RANLIB});
+  }
+
+  if (!defined($ranlibPath) || $ranlibPath eq "") {
+    chomp($ranlibPath=`which ranlib`);
+    die "Can't find 'ranlib'" if (! -x "$ranlibPath");
+  }
+}
+
 # Open the directory and read its contents, sorting by name and differentiating
 # by whether its a library (.a) or an object file (.o)
 opendir DIR,$Directory;
@@ -60,6 +77,93 @@ my @objs = grep(/LLVM.*\.o$/,sort(@files));
 my %libdefs;
 my %objdefs;
 
+my %libobjs;
+my %objdeps=();
+# Gather library definitions at object file granularity (optional)
+if ($PEROBJ) {
+  foreach my $lib (@libs ) {
+    `$ranlibPath $Directory/$lib`;
+    my $libpath = $lib;
+    $libpath =~ s/^libLLVM(.*)\.a/$1/;
+    $libpath =~ s/(.+)CodeGen$/Target\/$1/;
+    $libpath =~ s/(.+)AsmPrinter$/Target\/$1\/AsmPrinter/;
+    $libpath =~ s/(.+)AsmParser$/Target\/$1\/AsmParser/;
+    $libpath =~ s/(.+)Info$/Target\/$1\/TargetInfo/;
+    $libpath =~ s/(.+)Disassembler$/Target\/$1\/Disassembler/;
+    $libpath =~ s/SelectionDAG/CodeGen\/SelectionDAG/;
+    $libpath =~ s/^AsmPrinter/CodeGen\/AsmPrinter/;
+    $libpath =~ s/^BitReader/Bitcode\/Reader/;
+    $libpath =~ s/^BitWriter/Bitcode\/Writer/;
+    $libpath =~ s/^CBackend/Target\/CBackend/;
+    $libpath =~ s/^CppBackend/Target\/CppBackend/;
+    $libpath =~ s/^MSIL/Target\/MSIL/;
+    $libpath =~ s/^Core/VMCore/;
+    $libpath =~ s/^Instrumentation/Transforms\/Instrumentation/;
+    $libpath =~ s/^Interpreter/ExecutionEngine\/Interpreter/;
+    $libpath =~ s/^JIT/ExecutionEngine\/JIT/;
+    $libpath =~ s/^ScalarOpts/Transforms\/Scalar/;
+    $libpath =~ s/^TransformUtils/Transforms\/Utils/;
+    $libpath =~ s/^ipa/Analysis\/IPA/;
+    $libpath =~ s/^ipo/Transforms\/IPO/;
+    $libpath =~ s/^pic16passes/Target\/PIC16\/PIC16Passes/;
+    $libpath = "lib/".$libpath."/";
+    open DEFS, "$nmPath -sg $Directory/$lib|";
+    while (<DEFS>) {
+      chomp;
+      if (/^([^ ]*) in ([^ ]*)/) {
+        my $objfile = $libpath.$2;
+        $objdefs{$1} = $objfile;
+        $objdeps{$objfile} = {};
+        $libobjs{$lib}{$objfile}=1;
+#        my $p = "../llvm/".$objfile;
+#        $p =~ s/Support\/reg(.*).o/Support\/reg$1.c/;
+#        $p =~ s/.o$/.cpp/;
+#        unless (-e $p) {
+#          die "$p\n"
+#        }
+      }
+    }
+    close DEFS or die "nm failed";
+  }
+  foreach my $lib (@libs ) {
+    my $libpath = $lib;
+    $libpath =~ s/^libLLVM(.*)\.a/$1/;
+    $libpath =~ s/(.+)CodeGen$/Target\/$1/;
+    $libpath =~ s/(.+)AsmPrinter$/Target\/$1\/AsmPrinter/;
+    $libpath =~ s/(.+)AsmParser$/Target\/$1\/AsmParser/;
+    $libpath =~ s/(.+)Info$/Target\/$1\/TargetInfo/;
+    $libpath =~ s/(.+)Disassembler$/Target\/$1\/Disassembler/;
+    $libpath =~ s/SelectionDAG/CodeGen\/SelectionDAG/;
+    $libpath =~ s/^AsmPrinter/CodeGen\/AsmPrinter/;
+    $libpath =~ s/^BitReader/Bitcode\/Reader/;
+    $libpath =~ s/^BitWriter/Bitcode\/Writer/;
+    $libpath =~ s/^CBackend/Target\/CBackend/;
+    $libpath =~ s/^CppBackend/Target\/CppBackend/;
+    $libpath =~ s/^MSIL/Target\/MSIL/;
+    $libpath =~ s/^Core/VMCore/;
+    $libpath =~ s/^Instrumentation/Transforms\/Instrumentation/;
+    $libpath =~ s/^Interpreter/ExecutionEngine\/Interpreter/;
+    $libpath =~ s/^JIT/ExecutionEngine\/JIT/;
+    $libpath =~ s/^ScalarOpts/Transforms\/Scalar/;
+    $libpath =~ s/^TransformUtils/Transforms\/Utils/;
+    $libpath =~ s/^ipa/Analysis\/IPA/;
+    $libpath =~ s/^ipo/Transforms\/IPO/;
+    $libpath =~ s/^pic16passes/Target\/PIC16\/PIC16Passes/;
+    $libpath = "lib/".$libpath."/";
+    open UDEFS, "$nmPath -Aup $Directory/$lib|";
+    while (<UDEFS>) {
+      chomp;
+      if (/:([^:]+):/) {
+        my $obj = $libpath.$1;
+        s/[^ ]+: *U //;
+        if (defined($objdefs{$_})) {
+          $objdeps{$obj}{$objdefs{$_}}=1;
+        }
+      }
+    }
+    close UDEFS or die "nm failed"
+  }
+} else {
 # Gather definitions from the libraries
 foreach my $lib (@libs ) {
   open DEFS, "$nmPath -g $Directory/$lib|";
@@ -72,6 +176,7 @@ foreach my $lib (@libs ) {
   }
   close DEFS or die "nm failed";
 }
+}
 
 # Gather definitions from the object files.
 foreach my $obj (@objs ) {
@@ -109,6 +214,11 @@ sub gen_one_entry {
       $DepLibs{$libdefs{$_}} = [] unless exists $DepLibs{$libdefs{$_}};
       push(@{$DepLibs{$libdefs{$_}}}, $_);
     } elsif (defined($objdefs{$_}) && $objdefs{$_} ne $lib) {
+      if ($PEROBJ && !$PEROBJINCL) {
+        # -perobjincl makes .a files depend on the .o files they themselves
+        # contain; the default is not to depend on these.
+        next if defined $libobjs{$lib}{$objdefs{$_}};
+      }
       my $libroot = $lib;
       $libroot =~ s/lib(.*).a/$1/;
       if ($objdefs{$_} ne "$libroot.o") {
@@ -144,6 +254,25 @@ sub gen_one_entry {
     }
     close UNDEFS or die "nm failed";
   }
+  if ($PEROBJINCL) {
+     # include the .a's objects
+     for my $obj (keys %{$libobjs{$lib}}) {
+        $DepLibs{$obj} = ["<.a object>"] unless exists $DepLibs{$obj};
+     }
+     my $madechange = 1;
+     while($madechange) {
+      $madechange = 0;
+      my %temp = %DepLibs;
+      foreach my $obj (keys %DepLibs) {
+        foreach my $objdeps (keys %{$objdeps{$obj}}) {
+          next if defined $temp{$objdeps};
+          push(@{$temp{$objdeps}}, $obj);
+          $madechange = 1;
+        }
+      }
+      %DepLibs = %temp;
+     }
+  }
 
   for my $key (sort keys %DepLibs) {
     if ($FLAT) {
@@ -209,6 +338,18 @@ foreach my $lib (@libs) {
   gen_one_entry($lib);
 }
 
+if ($PEROBJ) {
+  foreach my $obj (keys %objdeps) {
+     print "$obj:";
+     if (!$PEROBJINCL) {
+      foreach my $dep (keys %{$objdeps{$obj}}) {
+          print " $dep";
+      }
+    }
+     print "\n";
+  }
+}
+
 if (!$FLAT) {
   print DOT "}\n";
   close DOT;
diff --git a/libclamav/c++/llvm/utils/TableGen/AsmMatcherEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/AsmMatcherEmitter.cpp
index ce1521d..b823e57 100644
--- a/libclamav/c++/llvm/utils/TableGen/AsmMatcherEmitter.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/AsmMatcherEmitter.cpp
@@ -140,7 +140,7 @@ static std::string FlattenVariants(const std::string &AsmString,
 }
 
 /// TokenizeAsmString - Tokenize a simplified assembly string.
-static void TokenizeAsmString(const StringRef &AsmString, 
+static void TokenizeAsmString(StringRef AsmString, 
                               SmallVectorImpl<StringRef> &Tokens) {
   unsigned Prev = 0;
   bool InTok = true;
@@ -207,7 +207,7 @@ static void TokenizeAsmString(const StringRef &AsmString,
     Tokens.push_back(AsmString.substr(Prev));
 }
 
-static bool IsAssemblerInstruction(const StringRef &Name,
+static bool IsAssemblerInstruction(StringRef Name,
                                    const CodeGenInstruction &CGI, 
                                    const SmallVectorImpl<StringRef> &Tokens) {
   // Ignore "codegen only" instructions.
@@ -528,10 +528,10 @@ private:
 
 private:
   /// getTokenClass - Lookup or create the class for the given token.
-  ClassInfo *getTokenClass(const StringRef &Token);
+  ClassInfo *getTokenClass(StringRef Token);
 
   /// getOperandClass - Lookup or create the class for the given operand.
-  ClassInfo *getOperandClass(const StringRef &Token,
+  ClassInfo *getOperandClass(StringRef Token,
                              const CodeGenInstruction::OperandInfo &OI);
 
   /// BuildRegisterClasses - Build the ClassInfo* instances for register
@@ -581,7 +581,7 @@ void InstructionInfo::dump() {
   }
 }
 
-static std::string getEnumNameForToken(const StringRef &Str) {
+static std::string getEnumNameForToken(StringRef Str) {
   std::string Res;
   
   for (StringRef::iterator it = Str.begin(), ie = Str.end(); it != ie; ++it) {
@@ -603,7 +603,7 @@ static std::string getEnumNameForToken(const StringRef &Str) {
 }
 
 /// getRegisterRecord - Get the register record for \arg name, or 0.
-static Record *getRegisterRecord(CodeGenTarget &Target, const StringRef &Name) {
+static Record *getRegisterRecord(CodeGenTarget &Target, StringRef Name) {
   for (unsigned i = 0, e = Target.getRegisters().size(); i != e; ++i) {
     const CodeGenRegister &Reg = Target.getRegisters()[i];
     if (Name == Reg.TheDef->getValueAsString("AsmName"))
@@ -613,7 +613,7 @@ static Record *getRegisterRecord(CodeGenTarget &Target, const StringRef &Name) {
   return 0;
 }
 
-ClassInfo *AsmMatcherInfo::getTokenClass(const StringRef &Token) {
+ClassInfo *AsmMatcherInfo::getTokenClass(StringRef Token) {
   ClassInfo *&Entry = TokenClasses[Token];
   
   if (!Entry) {
@@ -631,7 +631,7 @@ ClassInfo *AsmMatcherInfo::getTokenClass(const StringRef &Token) {
 }
 
 ClassInfo *
-AsmMatcherInfo::getOperandClass(const StringRef &Token,
+AsmMatcherInfo::getOperandClass(StringRef Token,
                                 const CodeGenInstruction::OperandInfo &OI) {
   if (OI.Rec->isSubClassOf("RegisterClass")) {
     ClassInfo *CI = RegisterClassClasses[OI.Rec];
@@ -782,10 +782,16 @@ void AsmMatcherInfo::BuildRegisterClasses(CodeGenTarget &Target,
 void AsmMatcherInfo::BuildOperandClasses(CodeGenTarget &Target) {
   std::vector<Record*> AsmOperands;
   AsmOperands = Records.getAllDerivedDefinitions("AsmOperandClass");
+
+  // Pre-populate AsmOperandClasses map.
+  for (std::vector<Record*>::iterator it = AsmOperands.begin(), 
+         ie = AsmOperands.end(); it != ie; ++it)
+    AsmOperandClasses[*it] = new ClassInfo();
+
   unsigned Index = 0;
   for (std::vector<Record*>::iterator it = AsmOperands.begin(), 
          ie = AsmOperands.end(); it != ie; ++it, ++Index) {
-    ClassInfo *CI = new ClassInfo();
+    ClassInfo *CI = AsmOperandClasses[*it];
     CI->Kind = ClassInfo::UserClass0 + Index;
 
     Init *Super = (*it)->getValueInit("SuperClass");
@@ -938,10 +944,29 @@ void AsmMatcherInfo::BuildInfo(CodeGenTarget &Target) {
                           OperandName.str() + "'");
       }
 
-      const CodeGenInstruction::OperandInfo &OI = II->Instr->OperandList[Idx];
+      // FIXME: This is annoying: the named operand may be tied (e.g.,
+      // XCHG8rm). What we want is the untied operand, which we now have to
+      // grovel for. Only worry about this for single-entry operands; we
+      // have to clean this up anyway.
+      const CodeGenInstruction::OperandInfo *OI = &II->Instr->OperandList[Idx];
+      if (OI->Constraints[0].isTied()) {
+        unsigned TiedOp = OI->Constraints[0].getTiedOperand();
+
+        // The tied operand index is an MIOperand index, find the operand that
+        // contains it.
+        for (unsigned i = 0, e = II->Instr->OperandList.size(); i != e; ++i) {
+          if (II->Instr->OperandList[i].MIOperandNo == TiedOp) {
+            OI = &II->Instr->OperandList[i];
+            break;
+          }
+        }
+
+        assert(OI && "Unable to find tied operand target!");
+      }
+
       InstructionInfo::Operand Op;
-      Op.Class = getOperandClass(Token, OI);
-      Op.OperandInfo = &OI;
+      Op.Class = getOperandClass(Token, *OI);
+      Op.OperandInfo = OI;
       II->Operands.push_back(Op);
     }
   }
@@ -950,6 +975,16 @@ void AsmMatcherInfo::BuildInfo(CodeGenTarget &Target) {
   std::sort(Classes.begin(), Classes.end(), less_ptr<ClassInfo>());
 }
 
+static std::pair<unsigned, unsigned> *
+GetTiedOperandAtIndex(SmallVectorImpl<std::pair<unsigned, unsigned> > &List,
+                      unsigned Index) {
+  for (unsigned i = 0, e = List.size(); i != e; ++i)
+    if (Index == List[i].first)
+      return &List[i];
+
+  return 0;
+}
+
 static void EmitConvertToMCInst(CodeGenTarget &Target,
                                 std::vector<InstructionInfo*> &Infos,
                                 raw_ostream &OS) {
@@ -990,6 +1025,19 @@ static void EmitConvertToMCInst(CodeGenTarget &Target,
       if (Op.OperandInfo)
         MIOperandList.push_back(std::make_pair(Op.OperandInfo->MIOperandNo, i));
     }
+
+    // Find any tied operands.
+    SmallVector<std::pair<unsigned, unsigned>, 4> TiedOperands;
+    for (unsigned i = 0, e = II.Instr->OperandList.size(); i != e; ++i) {
+      const CodeGenInstruction::OperandInfo &OpInfo = II.Instr->OperandList[i];
+      for (unsigned j = 0, e = OpInfo.Constraints.size(); j != e; ++j) {
+        const CodeGenInstruction::ConstraintInfo &CI = OpInfo.Constraints[j];
+        if (CI.isTied())
+          TiedOperands.push_back(std::make_pair(OpInfo.MIOperandNo + j,
+                                                CI.getTiedOperand()));
+      }
+    }
+
     std::sort(MIOperandList.begin(), MIOperandList.end());
 
     // Compute the total number of operands.
@@ -1008,14 +1056,20 @@ static void EmitConvertToMCInst(CodeGenTarget &Target,
       assert(CurIndex <= Op.OperandInfo->MIOperandNo &&
              "Duplicate match for instruction operand!");
       
-      Signature += "_";
-
       // Skip operands which weren't matched by anything, this occurs when the
       // .td file encodes "implicit" operands as explicit ones.
       //
       // FIXME: This should be removed from the MCInst structure.
-      for (; CurIndex != Op.OperandInfo->MIOperandNo; ++CurIndex)
-        Signature += "Imp";
+      for (; CurIndex != Op.OperandInfo->MIOperandNo; ++CurIndex) {
+        std::pair<unsigned, unsigned> *Tie = GetTiedOperandAtIndex(TiedOperands,
+                                                                   CurIndex);
+        if (!Tie)
+          Signature += "__Imp";
+        else
+          Signature += "__Tie" + utostr(Tie->second);
+      }
+
+      Signature += "__";
 
       // Registers are always converted the same, don't duplicate the conversion
       // function based on them.
@@ -1033,8 +1087,14 @@ static void EmitConvertToMCInst(CodeGenTarget &Target,
     }
 
     // Add any trailing implicit operands.
-    for (; CurIndex != NumMIOperands; ++CurIndex)
-      Signature += "Imp";
+    for (; CurIndex != NumMIOperands; ++CurIndex) {
+      std::pair<unsigned, unsigned> *Tie = GetTiedOperandAtIndex(TiedOperands,
+                                                                 CurIndex);
+      if (!Tie)
+        Signature += "__Imp";
+      else
+        Signature += "__Tie" + utostr(Tie->second);
+    }
 
     II.ConversionFnKind = Signature;
 
@@ -1054,8 +1114,22 @@ static void EmitConvertToMCInst(CodeGenTarget &Target,
       InstructionInfo::Operand &Op = II.Operands[MIOperandList[i].second];
 
       // Add the implicit operands.
-      for (; CurIndex != Op.OperandInfo->MIOperandNo; ++CurIndex)
-        CvtOS << "    Inst.addOperand(MCOperand::CreateReg(0));\n";
+      for (; CurIndex != Op.OperandInfo->MIOperandNo; ++CurIndex) {
+        // See if this is a tied operand.
+        std::pair<unsigned, unsigned> *Tie = GetTiedOperandAtIndex(TiedOperands,
+                                                                   CurIndex);
+
+        if (!Tie) {
+          // If not, this is some implicit operand. Just assume it is a register
+          // for now.
+          CvtOS << "    Inst.addOperand(MCOperand::CreateReg(0));\n";
+        } else {
+          // Copy the tied operand.
+          assert(Tie->first>Tie->second && "Tied operand precedes its target!");
+          CvtOS << "    Inst.addOperand(Inst.getOperand("
+                << Tie->second << "));\n";
+        }
+      }
 
       CvtOS << "    ((" << TargetOperandClass << "*)Operands["
          << MIOperandList[i].second 
@@ -1065,8 +1139,22 @@ static void EmitConvertToMCInst(CodeGenTarget &Target,
     }
     
     // And add trailing implicit operands.
-    for (; CurIndex != NumMIOperands; ++CurIndex)
-      CvtOS << "    Inst.addOperand(MCOperand::CreateReg(0));\n";
+    for (; CurIndex != NumMIOperands; ++CurIndex) {
+      std::pair<unsigned, unsigned> *Tie = GetTiedOperandAtIndex(TiedOperands,
+                                                                 CurIndex);
+
+      if (!Tie) {
+        // If not, this is some implicit operand. Just assume it is a register
+        // for now.
+        CvtOS << "    Inst.addOperand(MCOperand::CreateReg(0));\n";
+      } else {
+        // Copy the tied operand.
+        assert(Tie->first>Tie->second && "Tied operand precedes its target!");
+        CvtOS << "    Inst.addOperand(Inst.getOperand("
+              << Tie->second << "));\n";
+      }
+    }
+
     CvtOS << "    break;\n";
   }
 
@@ -1367,7 +1455,7 @@ static void EmitMatchTokenString(CodeGenTarget &Target,
       Matches.push_back(StringPair(CI.ValueName, "return " + CI.Name + ";"));
   }
 
-  OS << "static MatchClassKind MatchTokenString(const StringRef &Name) {\n";
+  OS << "static MatchClassKind MatchTokenString(StringRef Name) {\n";
 
   EmitStringMatcher("Name", Matches, OS);
 
@@ -1390,7 +1478,7 @@ static void EmitMatchRegisterName(CodeGenTarget &Target, Record *AsmParser,
                                  "return " + utostr(i + 1) + ";"));
   }
   
-  OS << "static unsigned MatchRegisterName(const StringRef &Name) {\n";
+  OS << "static unsigned MatchRegisterName(StringRef Name) {\n";
 
   EmitStringMatcher("Name", Matches, OS);
   
@@ -1407,9 +1495,11 @@ void AsmMatcherEmitter::run(raw_ostream &OS) {
   AsmMatcherInfo Info(AsmParser);
   Info.BuildInfo(Target);
 
-  // Sort the instruction table using the partial order on classes.
-  std::sort(Info.Instructions.begin(), Info.Instructions.end(),
-            less_ptr<InstructionInfo>());
+  // Sort the instruction table using the partial order on classes. We use
+  // stable_sort to ensure that ambiguous instructions are still
+  // deterministically ordered.
+  std::stable_sort(Info.Instructions.begin(), Info.Instructions.end(),
+                   less_ptr<InstructionInfo>());
   
   DEBUG_WITH_TYPE("instruction_info", {
       for (std::vector<InstructionInfo*>::iterator 
diff --git a/libclamav/c++/llvm/utils/TableGen/AsmWriterEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/AsmWriterEmitter.cpp
index ff83c76..3a38dd4 100644
--- a/libclamav/c++/llvm/utils/TableGen/AsmWriterEmitter.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/AsmWriterEmitter.cpp
@@ -13,339 +13,15 @@
 //===----------------------------------------------------------------------===//
 
 #include "AsmWriterEmitter.h"
+#include "AsmWriterInst.h"
 #include "CodeGenTarget.h"
 #include "Record.h"
 #include "StringToOffsetTable.h"
-#include "llvm/ADT/StringExtras.h"
 #include "llvm/Support/Debug.h"
 #include "llvm/Support/MathExtras.h"
 #include <algorithm>
 using namespace llvm;
 
-
-static bool isIdentChar(char C) {
-  return (C >= 'a' && C <= 'z') ||
-         (C >= 'A' && C <= 'Z') ||
-         (C >= '0' && C <= '9') ||
-         C == '_';
-}
-
-// This should be an anon namespace, this works around a GCC warning.
-namespace llvm {  
-  struct AsmWriterOperand {
-    enum OpType {
-      // Output this text surrounded by quotes to the asm.
-      isLiteralTextOperand, 
-      // This is the name of a routine to call to print the operand.
-      isMachineInstrOperand,
-      // Output this text verbatim to the asm writer.  It is code that
-      // will output some text to the asm.
-      isLiteralStatementOperand
-    } OperandType;
-
-    /// Str - For isLiteralTextOperand, this IS the literal text.  For
-    /// isMachineInstrOperand, this is the PrinterMethodName for the operand..
-    /// For isLiteralStatementOperand, this is the code to insert verbatim 
-    /// into the asm writer.
-    std::string Str;
-
-    /// MiOpNo - For isMachineInstrOperand, this is the operand number of the
-    /// machine instruction.
-    unsigned MIOpNo;
-    
-    /// MiModifier - For isMachineInstrOperand, this is the modifier string for
-    /// an operand, specified with syntax like ${opname:modifier}.
-    std::string MiModifier;
-
-    // To make VS STL happy
-    AsmWriterOperand(OpType op = isLiteralTextOperand):OperandType(op) {}
-
-    AsmWriterOperand(const std::string &LitStr,
-                     OpType op = isLiteralTextOperand)
-      : OperandType(op), Str(LitStr) {}
-
-    AsmWriterOperand(const std::string &Printer, unsigned OpNo, 
-                     const std::string &Modifier,
-                     OpType op = isMachineInstrOperand) 
-      : OperandType(op), Str(Printer), MIOpNo(OpNo),
-      MiModifier(Modifier) {}
-
-    bool operator!=(const AsmWriterOperand &Other) const {
-      if (OperandType != Other.OperandType || Str != Other.Str) return true;
-      if (OperandType == isMachineInstrOperand)
-        return MIOpNo != Other.MIOpNo || MiModifier != Other.MiModifier;
-      return false;
-    }
-    bool operator==(const AsmWriterOperand &Other) const {
-      return !operator!=(Other);
-    }
-    
-    /// getCode - Return the code that prints this operand.
-    std::string getCode() const;
-  };
-}
-
-namespace llvm {
-  class AsmWriterInst {
-  public:
-    std::vector<AsmWriterOperand> Operands;
-    const CodeGenInstruction *CGI;
-
-    AsmWriterInst(const CodeGenInstruction &CGI, Record *AsmWriter);
-
-    /// MatchesAllButOneOp - If this instruction is exactly identical to the
-    /// specified instruction except for one differing operand, return the
-    /// differing operand number.  Otherwise return ~0.
-    unsigned MatchesAllButOneOp(const AsmWriterInst &Other) const;
-
-  private:
-    void AddLiteralString(const std::string &Str) {
-      // If the last operand was already a literal text string, append this to
-      // it, otherwise add a new operand.
-      if (!Operands.empty() &&
-          Operands.back().OperandType == AsmWriterOperand::isLiteralTextOperand)
-        Operands.back().Str.append(Str);
-      else
-        Operands.push_back(AsmWriterOperand(Str));
-    }
-  };
-}
-
-
-std::string AsmWriterOperand::getCode() const {
-  if (OperandType == isLiteralTextOperand) {
-    if (Str.size() == 1)
-      return "O << '" + Str + "'; ";
-    return "O << \"" + Str + "\"; ";
-  }
-
-  if (OperandType == isLiteralStatementOperand)
-    return Str;
-
-  std::string Result = Str + "(MI";
-  if (MIOpNo != ~0U)
-    Result += ", " + utostr(MIOpNo);
-  if (!MiModifier.empty())
-    Result += ", \"" + MiModifier + '"';
-  return Result + "); ";
-}
-
-
-/// ParseAsmString - Parse the specified Instruction's AsmString into this
-/// AsmWriterInst.
-///
-AsmWriterInst::AsmWriterInst(const CodeGenInstruction &CGI, Record *AsmWriter) {
-  this->CGI = &CGI;
-  
-  unsigned Variant       = AsmWriter->getValueAsInt("Variant");
-  int FirstOperandColumn = AsmWriter->getValueAsInt("FirstOperandColumn");
-  int OperandSpacing     = AsmWriter->getValueAsInt("OperandSpacing");
-  
-  unsigned CurVariant = ~0U;  // ~0 if we are outside a {.|.|.} region, other #.
-
-  // This is the number of tabs we've seen if we're doing columnar layout.
-  unsigned CurColumn = 0;
-  
-  
-  // NOTE: Any extensions to this code need to be mirrored in the 
-  // AsmPrinter::printInlineAsm code that executes as compile time (assuming
-  // that inline asm strings should also get the new feature)!
-  const std::string &AsmString = CGI.AsmString;
-  std::string::size_type LastEmitted = 0;
-  while (LastEmitted != AsmString.size()) {
-    std::string::size_type DollarPos =
-      AsmString.find_first_of("${|}\\", LastEmitted);
-    if (DollarPos == std::string::npos) DollarPos = AsmString.size();
-
-    // Emit a constant string fragment.
-
-    if (DollarPos != LastEmitted) {
-      if (CurVariant == Variant || CurVariant == ~0U) {
-        for (; LastEmitted != DollarPos; ++LastEmitted)
-          switch (AsmString[LastEmitted]) {
-          case '\n':
-            AddLiteralString("\\n");
-            break;
-          case '\t':
-            // If the asm writer is not using a columnar layout, \t is not
-            // magic.
-            if (FirstOperandColumn == -1 || OperandSpacing == -1) {
-              AddLiteralString("\\t");
-            } else {
-              // We recognize a tab as an operand delimeter.
-              unsigned DestColumn = FirstOperandColumn + 
-                                    CurColumn++ * OperandSpacing;
-              Operands.push_back(
-                AsmWriterOperand("O.PadToColumn(" +
-                                 utostr(DestColumn) + ");\n",
-                                 AsmWriterOperand::isLiteralStatementOperand));
-            }
-            break;
-          case '"':
-            AddLiteralString("\\\"");
-            break;
-          case '\\':
-            AddLiteralString("\\\\");
-            break;
-          default:
-            AddLiteralString(std::string(1, AsmString[LastEmitted]));
-            break;
-          }
-      } else {
-        LastEmitted = DollarPos;
-      }
-    } else if (AsmString[DollarPos] == '\\') {
-      if (DollarPos+1 != AsmString.size() &&
-          (CurVariant == Variant || CurVariant == ~0U)) {
-        if (AsmString[DollarPos+1] == 'n') {
-          AddLiteralString("\\n");
-        } else if (AsmString[DollarPos+1] == 't') {
-          // If the asm writer is not using a columnar layout, \t is not
-          // magic.
-          if (FirstOperandColumn == -1 || OperandSpacing == -1) {
-            AddLiteralString("\\t");
-            break;
-          }
-            
-          // We recognize a tab as an operand delimeter.
-          unsigned DestColumn = FirstOperandColumn + 
-                                CurColumn++ * OperandSpacing;
-          Operands.push_back(
-            AsmWriterOperand("O.PadToColumn(" + utostr(DestColumn) + ");\n",
-                             AsmWriterOperand::isLiteralStatementOperand));
-          break;
-        } else if (std::string("${|}\\").find(AsmString[DollarPos+1]) 
-                   != std::string::npos) {
-          AddLiteralString(std::string(1, AsmString[DollarPos+1]));
-        } else {
-          throw "Non-supported escaped character found in instruction '" +
-            CGI.TheDef->getName() + "'!";
-        }
-        LastEmitted = DollarPos+2;
-        continue;
-      }
-    } else if (AsmString[DollarPos] == '{') {
-      if (CurVariant != ~0U)
-        throw "Nested variants found for instruction '" +
-              CGI.TheDef->getName() + "'!";
-      LastEmitted = DollarPos+1;
-      CurVariant = 0;   // We are now inside of the variant!
-    } else if (AsmString[DollarPos] == '|') {
-      if (CurVariant == ~0U)
-        throw "'|' character found outside of a variant in instruction '"
-          + CGI.TheDef->getName() + "'!";
-      ++CurVariant;
-      ++LastEmitted;
-    } else if (AsmString[DollarPos] == '}') {
-      if (CurVariant == ~0U)
-        throw "'}' character found outside of a variant in instruction '"
-          + CGI.TheDef->getName() + "'!";
-      ++LastEmitted;
-      CurVariant = ~0U;
-    } else if (DollarPos+1 != AsmString.size() &&
-               AsmString[DollarPos+1] == '$') {
-      if (CurVariant == Variant || CurVariant == ~0U) {
-        AddLiteralString("$");  // "$$" -> $
-      }
-      LastEmitted = DollarPos+2;
-    } else {
-      // Get the name of the variable.
-      std::string::size_type VarEnd = DollarPos+1;
- 
-      // handle ${foo}bar as $foo by detecting whether the character following
-      // the dollar sign is a curly brace.  If so, advance VarEnd and DollarPos
-      // so the variable name does not contain the leading curly brace.
-      bool hasCurlyBraces = false;
-      if (VarEnd < AsmString.size() && '{' == AsmString[VarEnd]) {
-        hasCurlyBraces = true;
-        ++DollarPos;
-        ++VarEnd;
-      }
-
-      while (VarEnd < AsmString.size() && isIdentChar(AsmString[VarEnd]))
-        ++VarEnd;
-      std::string VarName(AsmString.begin()+DollarPos+1,
-                          AsmString.begin()+VarEnd);
-
-      // Modifier - Support ${foo:modifier} syntax, where "modifier" is passed
-      // into printOperand.  Also support ${:feature}, which is passed into
-      // PrintSpecial.
-      std::string Modifier;
-      
-      // In order to avoid starting the next string at the terminating curly
-      // brace, advance the end position past it if we found an opening curly
-      // brace.
-      if (hasCurlyBraces) {
-        if (VarEnd >= AsmString.size())
-          throw "Reached end of string before terminating curly brace in '"
-                + CGI.TheDef->getName() + "'";
-        
-        // Look for a modifier string.
-        if (AsmString[VarEnd] == ':') {
-          ++VarEnd;
-          if (VarEnd >= AsmString.size())
-            throw "Reached end of string before terminating curly brace in '"
-              + CGI.TheDef->getName() + "'";
-          
-          unsigned ModifierStart = VarEnd;
-          while (VarEnd < AsmString.size() && isIdentChar(AsmString[VarEnd]))
-            ++VarEnd;
-          Modifier = std::string(AsmString.begin()+ModifierStart,
-                                 AsmString.begin()+VarEnd);
-          if (Modifier.empty())
-            throw "Bad operand modifier name in '"+ CGI.TheDef->getName() + "'";
-        }
-        
-        if (AsmString[VarEnd] != '}')
-          throw "Variable name beginning with '{' did not end with '}' in '"
-                + CGI.TheDef->getName() + "'";
-        ++VarEnd;
-      }
-      if (VarName.empty() && Modifier.empty())
-        throw "Stray '$' in '" + CGI.TheDef->getName() +
-              "' asm string, maybe you want $$?";
-
-      if (VarName.empty()) {
-        // Just a modifier, pass this into PrintSpecial.
-        Operands.push_back(AsmWriterOperand("PrintSpecial", ~0U, Modifier));
-      } else {
-        // Otherwise, normal operand.
-        unsigned OpNo = CGI.getOperandNamed(VarName);
-        CodeGenInstruction::OperandInfo OpInfo = CGI.OperandList[OpNo];
-
-        if (CurVariant == Variant || CurVariant == ~0U) {
-          unsigned MIOp = OpInfo.MIOperandNo;
-          Operands.push_back(AsmWriterOperand(OpInfo.PrinterMethodName, MIOp,
-                                              Modifier));
-        }
-      }
-      LastEmitted = VarEnd;
-    }
-  }
-  
-  Operands.push_back(AsmWriterOperand("return;",
-                                  AsmWriterOperand::isLiteralStatementOperand));
-}
-
-/// MatchesAllButOneOp - If this instruction is exactly identical to the
-/// specified instruction except for one differing operand, return the differing
-/// operand number.  If more than one operand mismatches, return ~1, otherwise
-/// if the instructions are identical return ~0.
-unsigned AsmWriterInst::MatchesAllButOneOp(const AsmWriterInst &Other)const{
-  if (Operands.size() != Other.Operands.size()) return ~1;
-
-  unsigned MismatchOperand = ~0U;
-  for (unsigned i = 0, e = Operands.size(); i != e; ++i) {
-    if (Operands[i] != Other.Operands[i]) {
-      if (MismatchOperand != ~0U)  // Already have one mismatch?
-        return ~1U;
-      else
-        MismatchOperand = i;
-    }
-  }
-  return MismatchOperand;
-}
-
 static void PrintCases(std::vector<std::pair<std::string,
                        AsmWriterOperand> > &OpsToPrint, raw_ostream &O) {
   O << "    case " << OpsToPrint.back().first << ": ";
@@ -580,7 +256,11 @@ void AsmWriterEmitter::EmitPrintInstruction(raw_ostream &O) {
          E = Target.inst_end(); I != E; ++I)
     if (!I->second.AsmString.empty() &&
         I->second.TheDef->getName() != "PHI")
-      Instructions.push_back(AsmWriterInst(I->second, AsmWriter));
+      Instructions.push_back(
+        AsmWriterInst(I->second, 
+                      AsmWriter->getValueAsInt("Variant"),
+                      AsmWriter->getValueAsInt("FirstOperandColumn"),
+                      AsmWriter->getValueAsInt("OperandSpacing")));
 
   // Get the instruction numbering.
   Target.getInstructionsByEnumValue(NumberedInstructions);
@@ -692,24 +372,6 @@ void AsmWriterEmitter::EmitPrintInstruction(raw_ostream &O) {
   StringTable.EmitString(O);
   O << ";\n\n";
 
-  O << "\n#ifndef NO_ASM_WRITER_BOILERPLATE\n";
-  
-  O << "  if (MI->getOpcode() == TargetInstrInfo::INLINEASM) {\n"
-    << "    printInlineAsm(MI);\n"
-    << "    return;\n"
-    << "  } else if (MI->isLabel()) {\n"
-    << "    printLabel(MI);\n"
-    << "    return;\n"
-    << "  } else if (MI->getOpcode() == TargetInstrInfo::IMPLICIT_DEF) {\n"
-    << "    printImplicitDef(MI);\n"
-    << "    return;\n"
-    << "  } else if (MI->getOpcode() == TargetInstrInfo::KILL) {\n"
-    << "    printKill(MI);\n"
-    << "    return;\n"
-    << "  }\n\n";
-
-  O << "\n#endif\n";
-
   O << "  O << \"\\t\";\n\n";
 
   O << "  // Emit the opcode for the instruction.\n"
@@ -832,11 +494,55 @@ void AsmWriterEmitter::EmitGetRegisterName(raw_ostream &O) {
     << "}\n";
 }
 
+void AsmWriterEmitter::EmitGetInstructionName(raw_ostream &O) {
+  CodeGenTarget Target;
+  Record *AsmWriter = Target.getAsmWriter();
+  std::string ClassName = AsmWriter->getValueAsString("AsmWriterClassName");
+
+  std::vector<const CodeGenInstruction*> NumberedInstructions;
+  Target.getInstructionsByEnumValue(NumberedInstructions);
+  
+  StringToOffsetTable StringTable;
+  O <<
+"\n\n#ifdef GET_INSTRUCTION_NAME\n"
+"#undef GET_INSTRUCTION_NAME\n\n"
+"/// getInstructionName: This method is automatically generated by tblgen\n"
+"/// from the instruction set description.  This returns the enum name of the\n"
+"/// specified instruction.\n"
+  "const char *" << Target.getName() << ClassName
+  << "::getInstructionName(unsigned Opcode) {\n"
+  << "  assert(Opcode < " << NumberedInstructions.size()
+  << " && \"Invalid instruction number!\");\n"
+  << "\n"
+  << "  static const unsigned InstAsmOffset[] = {";
+  for (unsigned i = 0, e = NumberedInstructions.size(); i != e; ++i) {
+    const CodeGenInstruction &Inst = *NumberedInstructions[i];
+    
+    std::string AsmName = Inst.TheDef->getName();
+    if ((i % 14) == 0)
+      O << "\n    ";
+    
+    O << StringTable.GetOrAddStringOffset(AsmName) << ", ";
+  }
+  O << "0\n"
+  << "  };\n"
+  << "\n";
+  
+  O << "  const char *Strs =\n";
+  StringTable.EmitString(O);
+  O << ";\n";
+  
+  O << "  return Strs+InstAsmOffset[Opcode];\n"
+  << "}\n\n#endif\n";
+}
+
+
 
 void AsmWriterEmitter::run(raw_ostream &O) {
   EmitSourceFileHeader("Assembly Writer Source Fragment", O);
   
   EmitPrintInstruction(O);
   EmitGetRegisterName(O);
+  EmitGetInstructionName(O);
 }
 
diff --git a/libclamav/c++/llvm/utils/TableGen/AsmWriterEmitter.h b/libclamav/c++/llvm/utils/TableGen/AsmWriterEmitter.h
index 7862caa..9f7d776 100644
--- a/libclamav/c++/llvm/utils/TableGen/AsmWriterEmitter.h
+++ b/libclamav/c++/llvm/utils/TableGen/AsmWriterEmitter.h
@@ -37,6 +37,7 @@ namespace llvm {
 private:
     void EmitPrintInstruction(raw_ostream &o);
     void EmitGetRegisterName(raw_ostream &o);
+    void EmitGetInstructionName(raw_ostream &o);
     
     AsmWriterInst *getAsmWriterInstByID(unsigned ID) const {
       assert(ID < NumberedInstructions.size());
diff --git a/libclamav/c++/llvm/utils/TableGen/AsmWriterInst.cpp b/libclamav/c++/llvm/utils/TableGen/AsmWriterInst.cpp
new file mode 100644
index 0000000..508e453
--- /dev/null
+++ b/libclamav/c++/llvm/utils/TableGen/AsmWriterInst.cpp
@@ -0,0 +1,264 @@
+//===- AsmWriterInst.cpp - Classes encapsulating a printable inst ---------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// These classes implement a parser for assembly strings.
+//
+//===----------------------------------------------------------------------===//
+
+#include "AsmWriterInst.h"
+#include "CodeGenTarget.h"
+#include "Record.h"
+#include "llvm/ADT/StringExtras.h"
+
+using namespace llvm;
+
+static bool isIdentChar(char C) {
+  return (C >= 'a' && C <= 'z') ||
+  (C >= 'A' && C <= 'Z') ||
+  (C >= '0' && C <= '9') ||
+  C == '_';
+}
+
+std::string AsmWriterOperand::getCode() const {
+  if (OperandType == isLiteralTextOperand) {
+    if (Str.size() == 1)
+      return "O << '" + Str + "'; ";
+    return "O << \"" + Str + "\"; ";
+  }
+  
+  if (OperandType == isLiteralStatementOperand)
+    return Str;
+  
+  std::string Result = Str + "(MI";
+  if (MIOpNo != ~0U)
+    Result += ", " + utostr(MIOpNo);
+  if (!MiModifier.empty())
+    Result += ", \"" + MiModifier + '"';
+  return Result + "); ";
+}
+
+/// ParseAsmString - Parse the specified Instruction's AsmString into this
+/// AsmWriterInst.
+///
+AsmWriterInst::AsmWriterInst(const CodeGenInstruction &CGI,
+                             unsigned Variant,
+                             int FirstOperandColumn,
+                             int OperandSpacing) {
+  this->CGI = &CGI;
+  
+  unsigned CurVariant = ~0U;  // ~0 if we are outside a {.|.|.} region, other #.
+  
+  // This is the number of tabs we've seen if we're doing columnar layout.
+  unsigned CurColumn = 0;
+  
+  
+  // NOTE: Any extensions to this code need to be mirrored in the 
+  // AsmPrinter::printInlineAsm code that executes at compile time (assuming
+  // that inline asm strings should also get the new feature)!
+  const std::string &AsmString = CGI.AsmString;
+  std::string::size_type LastEmitted = 0;
+  while (LastEmitted != AsmString.size()) {
+    std::string::size_type DollarPos =
+    AsmString.find_first_of("${|}\\", LastEmitted);
+    if (DollarPos == std::string::npos) DollarPos = AsmString.size();
+    
+    // Emit a constant string fragment.
+    
+    if (DollarPos != LastEmitted) {
+      if (CurVariant == Variant || CurVariant == ~0U) {
+        for (; LastEmitted != DollarPos; ++LastEmitted)
+          switch (AsmString[LastEmitted]) {
+            case '\n':
+              AddLiteralString("\\n");
+              break;
+            case '\t':
+              // If the asm writer is not using a columnar layout, \t is not
+              // magic.
+              if (FirstOperandColumn == -1 || OperandSpacing == -1) {
+                AddLiteralString("\\t");
+              } else {
+                // We recognize a tab as an operand delimiter.
+                unsigned DestColumn = FirstOperandColumn + 
+                CurColumn++ * OperandSpacing;
+                Operands.push_back(
+                  AsmWriterOperand(
+                    "O.PadToColumn(" +
+                    utostr(DestColumn) + ");\n",
+                    AsmWriterOperand::isLiteralStatementOperand));
+              }
+              break;
+            case '"':
+              AddLiteralString("\\\"");
+              break;
+            case '\\':
+              AddLiteralString("\\\\");
+              break;
+            default:
+              AddLiteralString(std::string(1, AsmString[LastEmitted]));
+              break;
+          }
+      } else {
+        LastEmitted = DollarPos;
+      }
+    } else if (AsmString[DollarPos] == '\\') {
+      if (DollarPos+1 != AsmString.size() &&
+          (CurVariant == Variant || CurVariant == ~0U)) {
+        if (AsmString[DollarPos+1] == 'n') {
+          AddLiteralString("\\n");
+        } else if (AsmString[DollarPos+1] == 't') {
+          // If the asm writer is not using a columnar layout, \t is not
+          // magic.
+          if (FirstOperandColumn == -1 || OperandSpacing == -1) {
+            AddLiteralString("\\t");
+            break;
+          }
+          
+          // We recognize a tab as an operand delimiter.
+          unsigned DestColumn = FirstOperandColumn + 
+          CurColumn++ * OperandSpacing;
+          Operands.push_back(
+            AsmWriterOperand("O.PadToColumn(" + utostr(DestColumn) + ");\n",
+                             AsmWriterOperand::isLiteralStatementOperand));
+          break;
+        } else if (std::string("${|}\\").find(AsmString[DollarPos+1]) 
+                   != std::string::npos) {
+          AddLiteralString(std::string(1, AsmString[DollarPos+1]));
+        } else {
+          throw "Non-supported escaped character found in instruction '" +
+          CGI.TheDef->getName() + "'!";
+        }
+        LastEmitted = DollarPos+2;
+        continue;
+      }
+    } else if (AsmString[DollarPos] == '{') {
+      if (CurVariant != ~0U)
+        throw "Nested variants found for instruction '" +
+        CGI.TheDef->getName() + "'!";
+      LastEmitted = DollarPos+1;
+      CurVariant = 0;   // We are now inside of the variant!
+    } else if (AsmString[DollarPos] == '|') {
+      if (CurVariant == ~0U)
+        throw "'|' character found outside of a variant in instruction '"
+        + CGI.TheDef->getName() + "'!";
+      ++CurVariant;
+      ++LastEmitted;
+    } else if (AsmString[DollarPos] == '}') {
+      if (CurVariant == ~0U)
+        throw "'}' character found outside of a variant in instruction '"
+        + CGI.TheDef->getName() + "'!";
+      ++LastEmitted;
+      CurVariant = ~0U;
+    } else if (DollarPos+1 != AsmString.size() &&
+               AsmString[DollarPos+1] == '$') {
+      if (CurVariant == Variant || CurVariant == ~0U) {
+        AddLiteralString("$");  // "$$" -> $
+      }
+      LastEmitted = DollarPos+2;
+    } else {
+      // Get the name of the variable.
+      std::string::size_type VarEnd = DollarPos+1;
+      
+      // Handle ${foo}bar as $foo by detecting whether the character following
+      // the dollar sign is a curly brace.  If so, advance VarEnd and DollarPos
+      // so the variable name does not contain the leading curly brace.
+      bool hasCurlyBraces = false;
+      if (VarEnd < AsmString.size() && '{' == AsmString[VarEnd]) {
+        hasCurlyBraces = true;
+        ++DollarPos;
+        ++VarEnd;
+      }
+      
+      while (VarEnd < AsmString.size() && isIdentChar(AsmString[VarEnd]))
+        ++VarEnd;
+      std::string VarName(AsmString.begin()+DollarPos+1,
+                          AsmString.begin()+VarEnd);
+      
+      // Modifier - Support ${foo:modifier} syntax, where "modifier" is passed
+      // into printOperand.  Also support ${:feature}, which is passed into
+      // PrintSpecial.
+      std::string Modifier;
+      
+      // In order to avoid starting the next string at the terminating curly
+      // brace, advance the end position past it if we found an opening curly
+      // brace.
+      if (hasCurlyBraces) {
+        if (VarEnd >= AsmString.size())
+          throw "Reached end of string before terminating curly brace in '"
+          + CGI.TheDef->getName() + "'";
+        
+        // Look for a modifier string.
+        if (AsmString[VarEnd] == ':') {
+          ++VarEnd;
+          if (VarEnd >= AsmString.size())
+            throw "Reached end of string before terminating curly brace in '"
+            + CGI.TheDef->getName() + "'";
+          
+          unsigned ModifierStart = VarEnd;
+          while (VarEnd < AsmString.size() && isIdentChar(AsmString[VarEnd]))
+            ++VarEnd;
+          Modifier = std::string(AsmString.begin()+ModifierStart,
+                                 AsmString.begin()+VarEnd);
+          if (Modifier.empty())
+            throw "Bad operand modifier name in '"+ CGI.TheDef->getName() + "'";
+        }
+        
+        if (AsmString[VarEnd] != '}')
+          throw "Variable name beginning with '{' did not end with '}' in '"
+          + CGI.TheDef->getName() + "'";
+        ++VarEnd;
+      }
+      if (VarName.empty() && Modifier.empty())
+        throw "Stray '$' in '" + CGI.TheDef->getName() +
+        "' asm string, maybe you want $$?";
+      
+      if (VarName.empty()) {
+        // Just a modifier, pass this into PrintSpecial.
+        Operands.push_back(AsmWriterOperand("PrintSpecial", 
+                                            ~0U, 
+                                            ~0U, 
+                                            Modifier));
+      } else {
+        // Otherwise, normal operand.
+        unsigned OpNo = CGI.getOperandNamed(VarName);
+        CodeGenInstruction::OperandInfo OpInfo = CGI.OperandList[OpNo];
+        
+        if (CurVariant == Variant || CurVariant == ~0U) {
+          unsigned MIOp = OpInfo.MIOperandNo;
+          Operands.push_back(AsmWriterOperand(OpInfo.PrinterMethodName, 
+                                              OpNo,
+                                              MIOp,
+                                              Modifier));
+        }
+      }
+      LastEmitted = VarEnd;
+    }
+  }
+  
+  Operands.push_back(AsmWriterOperand("return;",
+    AsmWriterOperand::isLiteralStatementOperand));
+}
+
+/// MatchesAllButOneOp - If this instruction is exactly identical to the
+/// specified instruction except for one differing operand, return the
+/// differing operand number.  If more than one operand mismatches, return
+/// ~1; if the instructions are identical, return ~0.
+unsigned AsmWriterInst::MatchesAllButOneOp(const AsmWriterInst &Other)const{
+  if (Operands.size() != Other.Operands.size()) return ~1;
+  
+  unsigned MismatchOperand = ~0U;
+  for (unsigned i = 0, e = Operands.size(); i != e; ++i) {
+    if (Operands[i] != Other.Operands[i]) {
+      if (MismatchOperand != ~0U)  // Already have one mismatch?
+        return ~1U;
+      else
+        MismatchOperand = i;
+    }
+  }
+  return MismatchOperand;
+}
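The constructor above threads `CurVariant` through the scan: `~0U` means "outside any `{a|b|c}` region", otherwise it is the index of the alternative currently being read, and text is kept only when it is shared by all variants or belongs to the requested `Variant`. A minimal standalone sketch of that selection logic (`selectVariant` is a hypothetical helper for illustration, not part of this patch):

```cpp
#include <cassert>
#include <string>

// Hypothetical helper: keep the Variant'th alternative of each "{a|b|c}"
// region, mirroring how AsmWriterInst's constructor tracks CurVariant.
static std::string selectVariant(const std::string &Asm, unsigned Variant) {
  std::string Out;
  unsigned CurVariant = ~0U;           // ~0U == outside a variant region
  for (char C : Asm) {
    if (C == '{')
      CurVariant = 0;                  // entering a variant region
    else if (C == '|')
      ++CurVariant;                    // moving to the next alternative
    else if (C == '}')
      CurVariant = ~0U;                // back outside the region
    else if (CurVariant == ~0U || CurVariant == Variant)
      Out += C;                        // shared text or the chosen alternative
  }
  return Out;
}
```

This skips the escape handling (`\\`, `$$`, `${op:modifier}`) that the real parser does, but shows why the same AsmString can print differently per assembler variant (e.g. AT&T vs Intel syntax).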
diff --git a/libclamav/c++/llvm/utils/TableGen/AsmWriterInst.h b/libclamav/c++/llvm/utils/TableGen/AsmWriterInst.h
new file mode 100644
index 0000000..20b8588
--- /dev/null
+++ b/libclamav/c++/llvm/utils/TableGen/AsmWriterInst.h
@@ -0,0 +1,113 @@
+//===- AsmWriterInst.h - Classes encapsulating a printable inst -*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// These classes implement a parser for assembly strings.  The parser splits
+// the string into operands, which can be literal strings (the constant bits of
+// the string), actual operands (i.e., operands from the MachineInstr), and
+// dynamically-generated text, specified by raw C++ code.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef ASMWRITER_INST_H
+#define ASMWRITER_INST_H
+
+#include <string>
+#include <vector>
+
+namespace llvm {
+  class CodeGenInstruction;
+  class Record;
+  
+  struct AsmWriterOperand {
+    enum OpType {
+      // Output this text surrounded by quotes to the asm.
+      isLiteralTextOperand, 
+      // This is the name of a routine to call to print the operand.
+      isMachineInstrOperand,
+      // Output this text verbatim to the asm writer.  It is code that
+      // will output some text to the asm.
+      isLiteralStatementOperand
+    } OperandType;
+    
+    /// Str - For isLiteralTextOperand, this IS the literal text.  For
+    /// isMachineInstrOperand, this is the PrinterMethodName for the operand.
+    /// For isLiteralStatementOperand, this is the code to insert verbatim 
+    /// into the asm writer.
+    std::string Str;
+    
+    /// CGIOpNo - For isMachineInstrOperand, this is the index of the operand in
+    /// the CodeGenInstruction.
+    unsigned CGIOpNo;
+    
+    /// MIOpNo - For isMachineInstrOperand, this is the operand number of the
+    /// machine instruction.
+    unsigned MIOpNo;
+    
+    /// MiModifier - For isMachineInstrOperand, this is the modifier string for
+    /// an operand, specified with syntax like ${opname:modifier}.
+    std::string MiModifier;
+    
+    // To make VS STL happy
+    AsmWriterOperand(OpType op = isLiteralTextOperand):OperandType(op) {}
+    
+    AsmWriterOperand(const std::string &LitStr,
+                     OpType op = isLiteralTextOperand)
+    : OperandType(op), Str(LitStr) {}
+    
+    AsmWriterOperand(const std::string &Printer,
+                     unsigned _CGIOpNo,
+                     unsigned _MIOpNo,
+                     const std::string &Modifier,
+                     OpType op = isMachineInstrOperand) 
+    : OperandType(op), Str(Printer), CGIOpNo(_CGIOpNo), MIOpNo(_MIOpNo),
+    MiModifier(Modifier) {}
+    
+    bool operator!=(const AsmWriterOperand &Other) const {
+      if (OperandType != Other.OperandType || Str != Other.Str) return true;
+      if (OperandType == isMachineInstrOperand)
+        return MIOpNo != Other.MIOpNo || MiModifier != Other.MiModifier;
+      return false;
+    }
+    bool operator==(const AsmWriterOperand &Other) const {
+      return !operator!=(Other);
+    }
+    
+    /// getCode - Return the code that prints this operand.
+    std::string getCode() const;
+  };
+  
+  class AsmWriterInst {
+  public:
+    std::vector<AsmWriterOperand> Operands;
+    const CodeGenInstruction *CGI;
+    
+    AsmWriterInst(const CodeGenInstruction &CGI, 
+                  unsigned Variant,
+                  int FirstOperandColumn,
+                  int OperandSpacing);
+    
+    /// MatchesAllButOneOp - If this instruction is exactly identical to the
+    /// specified instruction except for one differing operand, return the
+    /// differing operand number.  If more than one operand mismatches,
+    /// return ~1; if the instructions are identical, return ~0.
+    unsigned MatchesAllButOneOp(const AsmWriterInst &Other) const;
+    
+  private:
+    void AddLiteralString(const std::string &Str) {
+      // If the last operand was already a literal text string, append this to
+      // it, otherwise add a new operand.
+      if (!Operands.empty() &&
+          Operands.back().OperandType == AsmWriterOperand::isLiteralTextOperand)
+        Operands.back().Str.append(Str);
+      else
+        Operands.push_back(AsmWriterOperand(Str));
+    }
+  };
+}
+
+#endif
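The return convention of `MatchesAllButOneOp` (declared above, defined in AsmWriterInst.cpp) is easy to misread: `~0U` means the instructions are identical, `~1U` means more than one operand differs, and any other value is the single differing index. A condensed sketch of the same logic over plain ints (`matchesAllButOne` is a hypothetical stand-in for the real `AsmWriterOperand` comparison):

```cpp
#include <cassert>
#include <vector>

// Hypothetical stand-in for AsmWriterInst::MatchesAllButOneOp, comparing
// ints instead of AsmWriterOperands.  Returns ~0U if the lists are
// identical, ~1U if the sizes differ or more than one position differs,
// otherwise the index of the single mismatch.
static unsigned matchesAllButOne(const std::vector<int> &A,
                                 const std::vector<int> &B) {
  if (A.size() != B.size()) return ~1U;
  unsigned Mismatch = ~0U;
  for (unsigned i = 0, e = (unsigned)A.size(); i != e; ++i) {
    if (A[i] == B[i]) continue;
    if (Mismatch != ~0U) return ~1U;   // second mismatch: give up
    Mismatch = i;
  }
  return Mismatch;
}
```

The emitter uses this to factor near-identical instruction printers into one routine parameterized on the single differing operand.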
diff --git a/libclamav/c++/llvm/utils/TableGen/CMakeLists.txt b/libclamav/c++/llvm/utils/TableGen/CMakeLists.txt
index ce9b66f..a2678a2 100644
--- a/libclamav/c++/llvm/utils/TableGen/CMakeLists.txt
+++ b/libclamav/c++/llvm/utils/TableGen/CMakeLists.txt
@@ -1,6 +1,7 @@
 add_executable(tblgen
   AsmMatcherEmitter.cpp
   AsmWriterEmitter.cpp
+  AsmWriterInst.cpp
   CallingConvEmitter.cpp
   ClangDiagnosticsEmitter.cpp
   CodeEmitterGen.cpp
@@ -8,7 +9,11 @@ add_executable(tblgen
   CodeGenInstruction.cpp
   CodeGenTarget.cpp
   DAGISelEmitter.cpp
+  DAGISelMatcherEmitter.cpp
+  DAGISelMatcherGen.cpp
+  DAGISelMatcher.cpp
   DisassemblerEmitter.cpp
+  EDEmitter.cpp
   FastISelEmitter.cpp
   InstrEnumEmitter.cpp
   InstrInfoEmitter.cpp
diff --git a/libclamav/c++/llvm/utils/TableGen/CodeEmitterGen.cpp b/libclamav/c++/llvm/utils/TableGen/CodeEmitterGen.cpp
index 714a39c..f1857f5 100644
--- a/libclamav/c++/llvm/utils/TableGen/CodeEmitterGen.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/CodeEmitterGen.cpp
@@ -35,7 +35,7 @@ void CodeEmitterGen::reverseBits(std::vector<Record*> &Insts) {
         R->getName() == "IMPLICIT_DEF" ||
         R->getName() == "SUBREG_TO_REG" ||
         R->getName() == "COPY_TO_REGCLASS" ||
-        R->getName() == "DEBUG_VALUE") continue;
+        R->getName() == "DBG_VALUE") continue;
 
     BitsInit *BI = R->getValueAsBitsInit("Inst");
 
@@ -113,7 +113,7 @@ void CodeEmitterGen::run(raw_ostream &o) {
         R->getName() == "IMPLICIT_DEF" ||
         R->getName() == "SUBREG_TO_REG" ||
         R->getName() == "COPY_TO_REGCLASS" ||
-        R->getName() == "DEBUG_VALUE") {
+        R->getName() == "DBG_VALUE") {
       o << "    0U,\n";
       continue;
     }
@@ -152,7 +152,7 @@ void CodeEmitterGen::run(raw_ostream &o) {
         InstName == "IMPLICIT_DEF" ||
         InstName == "SUBREG_TO_REG" ||
         InstName == "COPY_TO_REGCLASS" ||
-        InstName == "DEBUG_VALUE") continue;
+        InstName == "DBG_VALUE") continue;
 
     BitsInit *BI = R->getValueAsBitsInit("Inst");
     const std::vector<RecordVal> &Vals = R->getValues();
diff --git a/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.cpp b/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.cpp
index cf79365..94d3534 100644
--- a/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.cpp
@@ -674,6 +674,15 @@ TreePatternNode *TreePatternNode::clone() const {
   return New;
 }
 
+/// RemoveAllTypes - Recursively strip all the types of this tree.
+void TreePatternNode::RemoveAllTypes() {
+  removeTypes();
+  if (isLeaf()) return;
+  for (unsigned i = 0, e = getNumChildren(); i != e; ++i)
+    getChild(i)->RemoveAllTypes();
+}
+
+
 /// SubstituteFormalArguments - Replace the formal arguments in this tree
 /// with actual values specified by ArgMap.
 void TreePatternNode::
@@ -768,7 +777,7 @@ TreePatternNode *TreePatternNode::InlinePatternFragments(TreePattern &TP) {
 /// references from the register file information, for example.
 ///
 static std::vector<unsigned char> getImplicitType(Record *R, bool NotRegisters,
-                                      TreePattern &TP) {
+                                                  TreePattern &TP) {
   // Some common return values
   std::vector<unsigned char> Unknown(1, EEVT::isUnknown);
   std::vector<unsigned char> Other(1, MVT::Other);
@@ -825,6 +834,48 @@ getIntrinsicInfo(const CodeGenDAGPatterns &CDP) const {
   return &CDP.getIntrinsicInfo(IID);
 }
 
+/// getComplexPatternInfo - If this node corresponds to a ComplexPattern,
+/// return the ComplexPattern information, otherwise return null.
+const ComplexPattern *
+TreePatternNode::getComplexPatternInfo(const CodeGenDAGPatterns &CGP) const {
+  if (!isLeaf()) return 0;
+  
+  DefInit *DI = dynamic_cast<DefInit*>(getLeafValue());
+  if (DI && DI->getDef()->isSubClassOf("ComplexPattern"))
+    return &CGP.getComplexPattern(DI->getDef());
+  return 0;
+}
+
+/// NodeHasProperty - Return true if this node has the specified property.
+bool TreePatternNode::NodeHasProperty(SDNP Property,
+                                      const CodeGenDAGPatterns &CGP) const {
+  if (isLeaf()) {
+    if (const ComplexPattern *CP = getComplexPatternInfo(CGP))
+      return CP->hasProperty(Property);
+    return false;
+  }
+  
+  Record *Operator = getOperator();
+  if (!Operator->isSubClassOf("SDNode")) return false;
+  
+  return CGP.getSDNodeInfo(Operator).hasProperty(Property);
+}
+
+
+
+
+/// TreeHasProperty - Return true if any node in this tree has the specified
+/// property.
+bool TreePatternNode::TreeHasProperty(SDNP Property,
+                                      const CodeGenDAGPatterns &CGP) const {
+  if (NodeHasProperty(Property, CGP))
+    return true;
+  for (unsigned i = 0, e = getNumChildren(); i != e; ++i)
+    if (getChild(i)->TreeHasProperty(Property, CGP))
+      return true;
+  return false;
+}  
+
 /// isCommutativeIntrinsic - Return true if the node corresponds to a
 /// commutative intrinsic.
 bool
@@ -845,7 +896,9 @@ bool TreePatternNode::ApplyTypeConstraints(TreePattern &TP, bool NotRegisters) {
     if (DefInit *DI = dynamic_cast<DefInit*>(getLeafValue())) {
       // If it's a regclass or something else known, include the type.
       return UpdateNodeType(getImplicitType(DI->getDef(), NotRegisters, TP),TP);
-    } else if (IntInit *II = dynamic_cast<IntInit*>(getLeafValue())) {
+    }
+    
+    if (IntInit *II = dynamic_cast<IntInit*>(getLeafValue())) {
       // Int inits are always integers. :)
       bool MadeChange = UpdateNodeType(MVT::iAny, TP);
       
diff --git a/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.h b/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.h
index c51232a..5eef9e1 100644
--- a/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.h
+++ b/libclamav/c++/llvm/utils/TableGen/CodeGenDAGPatterns.h
@@ -217,7 +217,7 @@ public:
     Children[i] = N;
   }
 
-  const std::vector<std::string> &getPredicateFns() const { return PredicateFns; }
+  const std::vector<std::string> &getPredicateFns() const {return PredicateFns;}
   void clearPredicateFns() { PredicateFns.clear(); }
   void setPredicateFns(const std::vector<std::string> &Fns) {
     assert(PredicateFns.empty() && "Overwriting non-empty predicate list!");
@@ -237,6 +237,18 @@ public:
   /// CodeGenIntrinsic information for it, otherwise return a null pointer.
   const CodeGenIntrinsic *getIntrinsicInfo(const CodeGenDAGPatterns &CDP) const;
 
+  /// getComplexPatternInfo - If this node corresponds to a ComplexPattern,
+  /// return the ComplexPattern information, otherwise return null.
+  const ComplexPattern *
+  getComplexPatternInfo(const CodeGenDAGPatterns &CGP) const;
+
+  /// NodeHasProperty - Return true if this node has the specified property.
+  bool NodeHasProperty(SDNP Property, const CodeGenDAGPatterns &CGP) const;
+  
+  /// TreeHasProperty - Return true if any node in this tree has the specified
+  /// property.
+  bool TreeHasProperty(SDNP Property, const CodeGenDAGPatterns &CGP) const;
+  
   /// isCommutativeIntrinsic - Return true if the node is an intrinsic which is
   /// marked isCommutative.
   bool isCommutativeIntrinsic(const CodeGenDAGPatterns &CDP) const;
@@ -249,6 +261,9 @@ public:   // Higher level manipulation routines.
   /// clone - Return a new copy of this tree.
   ///
   TreePatternNode *clone() const;
+
+  /// RemoveAllTypes - Recursively strip all the types of this tree.
+  void RemoveAllTypes();
   
   /// isIsomorphicTo - Return true if this node is recursively isomorphic to
   /// the specified node.  For this comparison, all of the state of the node
@@ -298,6 +313,11 @@ public:   // Higher level manipulation routines.
   bool canPatternMatch(std::string &Reason, const CodeGenDAGPatterns &CDP);
 };
 
+inline raw_ostream &operator<<(raw_ostream &OS, const TreePatternNode &TPN) {
+  TPN.print(OS);
+  return OS;
+}
+  
 
 /// TreePattern - Represent a pattern, used for instructions, pattern
 /// fragments, etc.
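Of the two new queries declared above, `NodeHasProperty` answers for a single node (consulting its ComplexPattern or SDNode record), while `TreeHasProperty` simply folds that check over the whole tree. The recursion can be sketched with a toy node type (`Node` and its members are illustrative, not the real `TreePatternNode` API):

```cpp
#include <cassert>
#include <vector>

// Toy stand-in for TreePatternNode: one boolean "property" per node.
struct Node {
  bool HasProp;
  std::vector<const Node *> Children;
  explicit Node(bool P) : HasProp(P) {}

  // Mirrors NodeHasProperty: consults only this node.
  bool nodeHasProperty() const { return HasProp; }

  // Mirrors TreeHasProperty: true if this node or any descendant has it.
  bool treeHasProperty() const {
    if (nodeHasProperty()) return true;
    for (const Node *C : Children)
      if (C->treeHasProperty()) return true;
    return false;
  }
};
```

The distinction matters for properties like `SDNPHasChain`: a pattern root may not touch the chain itself while a nested load or store deep in the tree does.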
diff --git a/libclamav/c++/llvm/utils/TableGen/CodeGenInstruction.cpp b/libclamav/c++/llvm/utils/TableGen/CodeGenInstruction.cpp
index 684431a..d31502b 100644
--- a/libclamav/c++/llvm/utils/TableGen/CodeGenInstruction.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/CodeGenInstruction.cpp
@@ -33,10 +33,10 @@ static void ParseConstraint(const std::string &CStr, CodeGenInstruction *I) {
       I->ParseOperandName(Name, false);
 
     // Build the string for the operand
-    std::string OpConstraint = "(1 << TOI::EARLY_CLOBBER)";
-    if (!I->OperandList[Op.first].Constraints[Op.second].empty())
+    if (!I->OperandList[Op.first].Constraints[Op.second].isNone())
       throw "Operand '" + Name + "' cannot have multiple constraints!";
-    I->OperandList[Op.first].Constraints[Op.second] = OpConstraint;
+    I->OperandList[Op.first].Constraints[Op.second] =
+      CodeGenInstruction::ConstraintInfo::getEarlyClobber();
     return;
   }
 
@@ -65,13 +65,11 @@ static void ParseConstraint(const std::string &CStr, CodeGenInstruction *I) {
 
 
   unsigned FlatOpNo = I->getFlattenedOperandNumber(SrcOp);
-  // Build the string for the operand.
-  std::string OpConstraint =
-  "((" + utostr(FlatOpNo) + " << 16) | (1 << TOI::TIED_TO))";
 
-  if (!I->OperandList[DestOp.first].Constraints[DestOp.second].empty())
+  if (!I->OperandList[DestOp.first].Constraints[DestOp.second].isNone())
     throw "Operand '" + DestOpName + "' cannot have multiple constraints!";
-  I->OperandList[DestOp.first].Constraints[DestOp.second] = OpConstraint;
+  I->OperandList[DestOp.first].Constraints[DestOp.second] =
+    CodeGenInstruction::ConstraintInfo::getTied(FlatOpNo);
 }
 
 static void ParseConstraints(const std::string &CStr, CodeGenInstruction *I) {
@@ -210,18 +208,13 @@ CodeGenInstruction::CodeGenInstruction(Record *R, const std::string &AsmStr)
   // For backward compatibility: isTwoAddress means operand 1 is tied to
   // operand 0.
   if (isTwoAddress) {
-    if (!OperandList[1].Constraints[0].empty())
+    if (!OperandList[1].Constraints[0].isNone())
       throw R->getName() + ": cannot use isTwoAddress property: instruction "
             "already has constraint set!";
-    OperandList[1].Constraints[0] = "((0 << 16) | (1 << TOI::TIED_TO))";
+    OperandList[1].Constraints[0] =
+      CodeGenInstruction::ConstraintInfo::getTied(0);
   }
 
-  // Any operands with unset constraints get 0 as their constraint.
-  for (unsigned op = 0, e = OperandList.size(); op != e; ++op)
-    for (unsigned j = 0, e = OperandList[op].MINumOperands; j != e; ++j)
-      if (OperandList[op].Constraints[j].empty())
-        OperandList[op].Constraints[j] = "0";
-
   // Parse the DisableEncoding field.
   std::string DisableEncoding = R->getValueAsString("DisableEncoding");
   while (1) {
diff --git a/libclamav/c++/llvm/utils/TableGen/CodeGenInstruction.h b/libclamav/c++/llvm/utils/TableGen/CodeGenInstruction.h
index d22ac3e..285da14 100644
--- a/libclamav/c++/llvm/utils/TableGen/CodeGenInstruction.h
+++ b/libclamav/c++/llvm/utils/TableGen/CodeGenInstruction.h
@@ -32,6 +32,36 @@ namespace llvm {
     /// instruction.
     std::string AsmString;
     
+    class ConstraintInfo {
+      enum { None, EarlyClobber, Tied } Kind;
+      unsigned OtherTiedOperand;
+    public:
+      ConstraintInfo() : Kind(None) {}
+
+      static ConstraintInfo getEarlyClobber() {
+        ConstraintInfo I;
+        I.Kind = EarlyClobber;
+        I.OtherTiedOperand = 0;
+        return I;
+      }
+      
+      static ConstraintInfo getTied(unsigned Op) {
+        ConstraintInfo I;
+        I.Kind = Tied;
+        I.OtherTiedOperand = Op;
+        return I;
+      }
+      
+      bool isNone() const { return Kind == None; }
+      bool isEarlyClobber() const { return Kind == EarlyClobber; }
+      bool isTied() const { return Kind == Tied; }
+      
+      unsigned getTiedOperand() const {
+        assert(isTied());
+        return OtherTiedOperand;
+      }
+    };
+    
     /// OperandInfo - The information we keep track of for each operand in the
     /// operand list for a tablegen instruction.
     struct OperandInfo {
@@ -67,7 +97,7 @@ namespace llvm {
       
       /// Constraint info for this operand.  This operand can have pieces, so we
       /// track constraint info for each.
-      std::vector<std::string> Constraints;
+      std::vector<ConstraintInfo> Constraints;
 
       OperandInfo(Record *R, const std::string &N, const std::string &PMN, 
                   unsigned MION, unsigned MINO, DagInit *MIOI)
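The new `ConstraintInfo` class above replaces the old string encodings such as `"((N << 16) | (1 << TOI::TIED_TO))"` with a small tagged value plus factory functions, so emitters can query the constraint kind directly instead of pasting opaque C++ expressions. A self-contained copy of just the class, lifted out of `CodeGenInstruction` so it can be exercised in isolation:

```cpp
#include <cassert>

// Copy of the ConstraintInfo class introduced by this patch, extracted
// from CodeGenInstruction for illustration.
class ConstraintInfo {
  enum { None, EarlyClobber, Tied } Kind;
  unsigned OtherTiedOperand;

public:
  ConstraintInfo() : Kind(None), OtherTiedOperand(0) {}

  static ConstraintInfo getEarlyClobber() {
    ConstraintInfo I;
    I.Kind = EarlyClobber;
    I.OtherTiedOperand = 0;
    return I;
  }

  static ConstraintInfo getTied(unsigned Op) {
    ConstraintInfo I;
    I.Kind = Tied;
    I.OtherTiedOperand = Op;
    return I;
  }

  bool isNone() const { return Kind == None; }
  bool isEarlyClobber() const { return Kind == EarlyClobber; }
  bool isTied() const { return Kind == Tied; }

  unsigned getTiedOperand() const {
    assert(isTied() && "not a tied constraint");
    return OtherTiedOperand;
  }
};
```

A default-constructed value means "no constraint", which is why the patch can also delete the loop that previously back-filled unset constraints with the string `"0"`.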
diff --git a/libclamav/c++/llvm/utils/TableGen/CodeGenTarget.cpp b/libclamav/c++/llvm/utils/TableGen/CodeGenTarget.cpp
index c9af5f7..2688091 100644
--- a/libclamav/c++/llvm/utils/TableGen/CodeGenTarget.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/CodeGenTarget.cpp
@@ -337,10 +337,10 @@ getInstructionsByEnumValue(std::vector<const CodeGenInstruction*>
     throw "Could not find 'COPY_TO_REGCLASS' instruction!";
   const CodeGenInstruction *COPY_TO_REGCLASS = &I->second;
 
-  I = getInstructions().find("DEBUG_VALUE");
+  I = getInstructions().find("DBG_VALUE");
   if (I == Instructions.end())
-    throw "Could not find 'DEBUG_VALUE' instruction!";
-  const CodeGenInstruction *DEBUG_VALUE = &I->second;
+    throw "Could not find 'DBG_VALUE' instruction!";
+  const CodeGenInstruction *DBG_VALUE = &I->second;
 
   // Print out the rest of the instructions now.
   NumberedInstructions.push_back(PHI);
@@ -354,7 +354,7 @@ getInstructionsByEnumValue(std::vector<const CodeGenInstruction*>
   NumberedInstructions.push_back(IMPLICIT_DEF);
   NumberedInstructions.push_back(SUBREG_TO_REG);
   NumberedInstructions.push_back(COPY_TO_REGCLASS);
-  NumberedInstructions.push_back(DEBUG_VALUE);
+  NumberedInstructions.push_back(DBG_VALUE);
   for (inst_iterator II = inst_begin(), E = inst_end(); II != E; ++II)
     if (&II->second != PHI &&
         &II->second != INLINEASM &&
@@ -367,7 +367,7 @@ getInstructionsByEnumValue(std::vector<const CodeGenInstruction*>
         &II->second != IMPLICIT_DEF &&
         &II->second != SUBREG_TO_REG &&
         &II->second != COPY_TO_REGCLASS &&
-        &II->second != DEBUG_VALUE)
+        &II->second != DBG_VALUE)
       NumberedInstructions.push_back(&II->second);
 }
 
diff --git a/libclamav/c++/llvm/utils/TableGen/DAGISelEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/DAGISelEmitter.cpp
index c97582b..0eb06bb 100644
--- a/libclamav/c++/llvm/utils/TableGen/DAGISelEmitter.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/DAGISelEmitter.cpp
@@ -12,6 +12,7 @@
 //===----------------------------------------------------------------------===//
 
 #include "DAGISelEmitter.h"
+#include "DAGISelMatcher.h"
 #include "Record.h"
 #include "llvm/ADT/StringExtras.h"
 #include "llvm/Support/CommandLine.h"
@@ -55,19 +56,6 @@ static bool NodeIsComplexPattern(TreePatternNode *N) {
           isSubClassOf("ComplexPattern"));
 }
 
-/// NodeGetComplexPattern - return the pointer to the ComplexPattern if N
-/// is a leaf node and a subclass of ComplexPattern, else it returns NULL.
-static const ComplexPattern *NodeGetComplexPattern(TreePatternNode *N,
-                                                   CodeGenDAGPatterns &CGP) {
-  if (N->isLeaf() &&
-      dynamic_cast<DefInit*>(N->getLeafValue()) &&
-      static_cast<DefInit*>(N->getLeafValue())->getDef()->
-      isSubClassOf("ComplexPattern")) {
-    return &CGP.getComplexPattern(static_cast<DefInit*>(N->getLeafValue())
-                                       ->getDef());
-  }
-  return NULL;
-}
 
 /// getPatternSize - Return the 'size' of this pattern.  We want to match large
 /// patterns before small ones.  This is used to determine the size of a
@@ -91,7 +79,7 @@ static unsigned getPatternSize(TreePatternNode *P, CodeGenDAGPatterns &CGP) {
   // Later we can allow complexity / cost for each pattern to be (optionally)
   // specified. To get best possible pattern match we'll need to dynamically
   // calculate the complexity of all patterns a dag can potentially map to.
-  const ComplexPattern *AM = NodeGetComplexPattern(P, CGP);
+  const ComplexPattern *AM = P->getComplexPatternInfo(CGP);
   if (AM)
     Size += AM->getNumOperands() * 3;
 
@@ -217,68 +205,10 @@ static MVT::SimpleValueType getRegisterValueType(Record *R, const CodeGenTarget
   return VT;
 }
 
-
-/// RemoveAllTypes - A quick recursive walk over a pattern which removes all
-/// type information from it.
-static void RemoveAllTypes(TreePatternNode *N) {
-  N->removeTypes();
-  if (!N->isLeaf())
-    for (unsigned i = 0, e = N->getNumChildren(); i != e; ++i)
-      RemoveAllTypes(N->getChild(i));
-}
-
-/// NodeHasProperty - return true if TreePatternNode has the specified
-/// property.
-static bool NodeHasProperty(TreePatternNode *N, SDNP Property,
-                            CodeGenDAGPatterns &CGP) {
-  if (N->isLeaf()) {
-    const ComplexPattern *CP = NodeGetComplexPattern(N, CGP);
-    if (CP)
-      return CP->hasProperty(Property);
-    return false;
-  }
-  Record *Operator = N->getOperator();
-  if (!Operator->isSubClassOf("SDNode")) return false;
-
-  return CGP.getSDNodeInfo(Operator).hasProperty(Property);
-}
-
-static bool PatternHasProperty(TreePatternNode *N, SDNP Property,
-                               CodeGenDAGPatterns &CGP) {
-  if (NodeHasProperty(N, Property, CGP))
-    return true;
-
-  for (unsigned i = 0, e = N->getNumChildren(); i != e; ++i) {
-    TreePatternNode *Child = N->getChild(i);
-    if (PatternHasProperty(Child, Property, CGP))
-      return true;
-  }
-
-  return false;
-}
-
 static std::string getOpcodeName(Record *Op, CodeGenDAGPatterns &CGP) {
   return CGP.getSDNodeInfo(Op).getEnumName();
 }
 
-static
-bool DisablePatternForFastISel(TreePatternNode *N, CodeGenDAGPatterns &CGP) {
-  bool isStore = !N->isLeaf() &&
-    getOpcodeName(N->getOperator(), CGP) == "ISD::STORE";
-  if (!isStore && NodeHasProperty(N, SDNPHasChain, CGP))
-    return false;
-
-  bool HasChain = false;
-  for (unsigned i = 0, e = N->getNumChildren(); i != e; ++i) {
-    TreePatternNode *Child = N->getChild(i);
-    if (PatternHasProperty(Child, SDNPHasChain, CGP)) {
-      HasChain = true;
-      break;
-    }
-  }
-  return HasChain;
-}
-
 //===----------------------------------------------------------------------===//
 // Node Transformation emitter implementation.
 //
@@ -463,760 +393,873 @@ public:
   /// matches, and the SDNode for the result has the RootName specified name.
   void EmitMatchCode(TreePatternNode *N, TreePatternNode *P,
                      const std::string &RootName, const std::string &ChainSuffix,
-                     bool &FoundChain) {
-
-    // Save loads/stores matched by a pattern.
-    if (!N->isLeaf() && N->getName().empty()) {
-      if (NodeHasProperty(N, SDNPMemOperand, CGP))
-        LSI.push_back(getNodeName(RootName));
-    }
-
-    bool isRoot = (P == NULL);
-    // Emit instruction predicates. Each predicate is just a string for now.
-    if (isRoot) {
-      // Record input varargs info.
-      NumInputRootOps = N->getNumChildren();
+                     bool &FoundChain);
 
-      if (DisablePatternForFastISel(N, CGP))
-        emitCheck("OptLevel != CodeGenOpt::None");
+  void EmitChildMatchCode(TreePatternNode *Child, TreePatternNode *Parent,
+                          const std::string &RootName, 
+                          const std::string &ChainSuffix, bool &FoundChain);
 
-      emitCheck(PredicateCheck);
-    }
+  /// EmitResultCode - Emit the action for a pattern.  Now that it has matched
+  /// we actually have to build a DAG!
+  std::vector<std::string>
+  EmitResultCode(TreePatternNode *N, std::vector<Record*> DstRegs,
+                 bool InFlagDecled, bool ResNodeDecled,
+                 bool LikeLeaf = false, bool isRoot = false);
 
-    if (N->isLeaf()) {
-      if (IntInit *II = dynamic_cast<IntInit*>(N->getLeafValue())) {
-        emitCheck("cast<ConstantSDNode>(" + getNodeName(RootName) +
-                  ")->getSExtValue() == INT64_C(" +
-                  itostr(II->getValue()) + ")");
-        return;
-      } else if (!NodeIsComplexPattern(N)) {
-        assert(0 && "Cannot match this as a leaf value!");
-        abort();
-      }
+  /// InsertOneTypeCheck - Insert a type-check for an unresolved type in 'Pat'
+  /// and add it to the tree. 'Pat' and 'Other' are isomorphic trees except that 
+  /// 'Pat' may be missing types.  If we find an unresolved type to add a check
+  /// for, this returns true; otherwise it returns false, meaning 'Pat' already
+  /// has all of its types.
+  bool InsertOneTypeCheck(TreePatternNode *Pat, TreePatternNode *Other,
+                          const std::string &Prefix, bool isRoot = false) {
+    // Did we find one?
+    if (Pat->getExtTypes() != Other->getExtTypes()) {
+      // Move a type over from 'other' to 'pat'.
+      Pat->setTypes(Other->getExtTypes());
+      // The top level node type is checked outside of the select function.
+      if (!isRoot)
+        emitCheck(Prefix + ".getValueType() == " +
+                  getName(Pat->getTypeNum(0)));
+      return true;
     }
   
-    // If this node has a name associated with it, capture it in VariableMap. If
-    // we already saw this in the pattern, emit code to verify dagness.
-    if (!N->getName().empty()) {
-      std::string &VarMapEntry = VariableMap[N->getName()];
-      if (VarMapEntry.empty()) {
-        VarMapEntry = RootName;
-      } else {
-        // If we get here, this is a second reference to a specific name.  Since
-        // we already have checked that the first reference is valid, we don't
-        // have to recursively match it, just check that it's the same as the
-        // previously named thing.
-        emitCheck(VarMapEntry + " == " + RootName);
-        return;
-      }
-
-      if (!N->isLeaf())
-        OperatorMap[N->getName()] = N->getOperator();
-    }
-
+    unsigned OpNo = (unsigned)Pat->NodeHasProperty(SDNPHasChain, CGP);
+    for (unsigned i = 0, e = Pat->getNumChildren(); i != e; ++i, ++OpNo)
+      if (InsertOneTypeCheck(Pat->getChild(i), Other->getChild(i),
+                             Prefix + utostr(OpNo)))
+        return true;
+    return false;
+  }
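+
+// InsertOneTypeCheck resolves one missing type per call by walking two
+// isomorphic trees in lockstep.  A minimal standalone sketch of that walk,
+// with invented names (ToyNode, insertOneTypeCheck) and none of the real
+// TableGen machinery:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Toy stand-in for TreePatternNode: an empty Type means "unresolved".
struct ToyNode {
  std::string Type;
  std::vector<std::unique_ptr<ToyNode>> Children;
};

// Copy the first type that 'Other' has but 'Pat' lacks (pre-order walk over
// two isomorphic trees); return true if one was copied, false if 'Pat'
// already agrees with 'Other' everywhere.
bool insertOneTypeCheck(ToyNode &Pat, const ToyNode &Other) {
  if (Pat.Type != Other.Type) {
    Pat.Type = Other.Type;  // move a type over from 'Other' to 'Pat'
    return true;
  }
  for (size_t i = 0, e = Pat.Children.size(); i != e; ++i)
    if (insertOneTypeCheck(*Pat.Children[i], *Other.Children[i]))
      return true;
  return false;
}
```

+// Callers of the real routine loop until it returns false, emitting one
+// getValueType() check per resolved type; the sketch keeps only the
+// tree-walk skeleton.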
 
-    // Emit code to load the child nodes and match their contents recursively.
-    unsigned OpNo = 0;
-    bool NodeHasChain = NodeHasProperty   (N, SDNPHasChain, CGP);
-    bool HasChain     = PatternHasProperty(N, SDNPHasChain, CGP);
-    bool EmittedUseCheck = false;
-    if (HasChain) {
-      if (NodeHasChain)
-        OpNo = 1;
-      if (!isRoot) {
-        // Multiple uses of actual result?
-        emitCheck(getValueName(RootName) + ".hasOneUse()");
-        EmittedUseCheck = true;
-        if (NodeHasChain) {
-          // If the immediate use can somehow reach this node through another
-          // path, then can't fold it either or it will create a cycle.
-          // e.g. In the following diagram, XX can reach ld through YY. If
-          // ld is folded into XX, then YY is both a predecessor and a successor
-          // of XX.
-          //
-          //         [ld]
-          //         ^  ^
-          //         |  |
-          //        /   \---
-          //      /        [YY]
-          //      |         ^
-          //     [XX]-------|
-          bool NeedCheck = P != Pattern;
-          if (!NeedCheck) {
-            const SDNodeInfo &PInfo = CGP.getSDNodeInfo(P->getOperator());
-            NeedCheck =
-              P->getOperator() == CGP.get_intrinsic_void_sdnode() ||
-              P->getOperator() == CGP.get_intrinsic_w_chain_sdnode() ||
-              P->getOperator() == CGP.get_intrinsic_wo_chain_sdnode() ||
-              PInfo.getNumOperands() > 1 ||
-              PInfo.hasProperty(SDNPHasChain) ||
-              PInfo.hasProperty(SDNPInFlag) ||
-              PInfo.hasProperty(SDNPOptInFlag);
+private:
+  /// EmitInFlagSelectCode - Emit the flag operands for the DAG that is
+  /// being built.
+  void EmitInFlagSelectCode(TreePatternNode *N, const std::string &RootName,
+                            bool &ChainEmitted, bool &InFlagDecled,
+                            bool &ResNodeDecled, bool isRoot = false) {
+    const CodeGenTarget &T = CGP.getTargetInfo();
+    unsigned OpNo = (unsigned)N->NodeHasProperty(SDNPHasChain, CGP);
+    bool HasInFlag = N->NodeHasProperty(SDNPInFlag, CGP);
+    for (unsigned i = 0, e = N->getNumChildren(); i != e; ++i, ++OpNo) {
+      TreePatternNode *Child = N->getChild(i);
+      if (!Child->isLeaf()) {
+        EmitInFlagSelectCode(Child, RootName + utostr(OpNo), ChainEmitted,
+                             InFlagDecled, ResNodeDecled);
+      } else {
+        if (DefInit *DI = dynamic_cast<DefInit*>(Child->getLeafValue())) {
+          if (!Child->getName().empty()) {
+            std::string Name = RootName + utostr(OpNo);
+            if (Duplicates.find(Name) != Duplicates.end())
+              // A duplicate! Do not emit a copy for this node.
+              continue;
           }
 
-          if (NeedCheck) {
-            std::string ParentName(RootName.begin(), RootName.end()-1);
-            emitCheck("IsLegalAndProfitableToFold(" + getNodeName(RootName) +
-                      ", " + getNodeName(ParentName) + ", N)");
+          Record *RR = DI->getDef();
+          if (RR->isSubClassOf("Register")) {
+            MVT::SimpleValueType RVT = getRegisterValueType(RR, T);
+            if (RVT == MVT::Flag) {
+              if (!InFlagDecled) {
+                emitCode("SDValue InFlag = " +
+                         getValueName(RootName + utostr(OpNo)) + ";");
+                InFlagDecled = true;
+              } else
+                emitCode("InFlag = " +
+                         getValueName(RootName + utostr(OpNo)) + ";");
+            } else {
+              if (!ChainEmitted) {
+                emitCode("SDValue Chain = CurDAG->getEntryNode();");
+                ChainName = "Chain";
+                ChainEmitted = true;
+              }
+              if (!InFlagDecled) {
+                emitCode("SDValue InFlag(0, 0);");
+                InFlagDecled = true;
+              }
+              std::string Decl = (!ResNodeDecled) ? "SDNode *" : "";
+              emitCode(Decl + "ResNode = CurDAG->getCopyToReg(" + ChainName +
+                       ", " + getNodeName(RootName) + "->getDebugLoc()" +
+                       ", " + getQualifiedName(RR) +
+                       ", " +  getValueName(RootName + utostr(OpNo)) +
+                       ", InFlag).getNode();");
+              ResNodeDecled = true;
+              emitCode(ChainName + " = SDValue(ResNode, 0);");
+              emitCode("InFlag = SDValue(ResNode, 1);");
+            }
           }
         }
       }
-
-      if (NodeHasChain) {
-        if (FoundChain) {
-          emitCheck("(" + ChainName + ".getNode() == " +
-                    getNodeName(RootName) + " || "
-                    "IsChainCompatible(" + ChainName + ".getNode(), " +
-                    getNodeName(RootName) + "))");
-          OrigChains.push_back(std::make_pair(ChainName,
-                                              getValueName(RootName)));
-        } else
-          FoundChain = true;
-        ChainName = "Chain" + ChainSuffix;
-        emitInit("SDValue " + ChainName + " = " + getNodeName(RootName) +
-                 "->getOperand(0);");
-      }
     }
 
-    // Don't fold any node which reads or writes a flag and has multiple uses.
-    // FIXME: We really need to separate the concepts of flag and "glue". Those
-    // real flag results, e.g. X86CMP output, can have multiple uses.
-    // FIXME: If the optional incoming flag does not exist. Then it is ok to
-    // fold it.
-    if (!isRoot &&
-        (PatternHasProperty(N, SDNPInFlag, CGP) ||
-         PatternHasProperty(N, SDNPOptInFlag, CGP) ||
-         PatternHasProperty(N, SDNPOutFlag, CGP))) {
-      if (!EmittedUseCheck) {
-        // Multiple uses of actual result?
-        emitCheck(getValueName(RootName) + ".hasOneUse()");
-      }
+    if (HasInFlag) {
+      if (!InFlagDecled) {
+        emitCode("SDValue InFlag = " + getNodeName(RootName) +
+               "->getOperand(" + utostr(OpNo) + ");");
+        InFlagDecled = true;
+      } else
+        emitCode("InFlag = " + getNodeName(RootName) +
+               "->getOperand(" + utostr(OpNo) + ");");
     }
+  }
+};
+
 
-    // If there are node predicates for this, emit the calls.
-    for (unsigned i = 0, e = N->getPredicateFns().size(); i != e; ++i)
-      emitCheck(N->getPredicateFns()[i] + "(" + getNodeName(RootName) + ")");
-
-    // If this is an 'and R, 1234' where the operation is AND/OR and the RHS is
-    // a constant without a predicate fn that has more that one bit set, handle
-    // this as a special case.  This is usually for targets that have special
-    // handling of certain large constants (e.g. alpha with it's 8/16/32-bit
-    // handling stuff).  Using these instructions is often far more efficient
-    // than materializing the constant.  Unfortunately, both the instcombiner
-    // and the dag combiner can often infer that bits are dead, and thus drop
-    // them from the mask in the dag.  For example, it might turn 'AND X, 255'
-    // into 'AND X, 254' if it knows the low bit is set.  Emit code that checks
-    // to handle this.
-    if (!N->isLeaf() && 
-        (N->getOperator()->getName() == "and" || 
-         N->getOperator()->getName() == "or") &&
-        N->getChild(1)->isLeaf() &&
-        N->getChild(1)->getPredicateFns().empty()) {
-      if (IntInit *II = dynamic_cast<IntInit*>(N->getChild(1)->getLeafValue())) {
-        if (!isPowerOf2_32(II->getValue())) {  // Don't bother with single bits.
-          emitInit("SDValue " + RootName + "0" + " = " +
-                   getNodeName(RootName) + "->getOperand(" + utostr(0) + ");");
-          emitInit("SDValue " + RootName + "1" + " = " +
-                   getNodeName(RootName) + "->getOperand(" + utostr(1) + ");");
-
-          unsigned NTmp = TmpNo++;
-          emitCode("ConstantSDNode *Tmp" + utostr(NTmp) +
-                   " = dyn_cast<ConstantSDNode>(" +
-                   getNodeName(RootName + "1") + ");");
-          emitCheck("Tmp" + utostr(NTmp));
-          const char *MaskPredicate = N->getOperator()->getName() == "or"
-            ? "CheckOrMask(" : "CheckAndMask(";
-          emitCheck(MaskPredicate + getValueName(RootName + "0") +
-                    ", Tmp" + utostr(NTmp) +
-                    ", INT64_C(" + itostr(II->getValue()) + "))");
-          
-          EmitChildMatchCode(N->getChild(0), N, RootName + utostr(0),
-                             ChainSuffix + utostr(0), FoundChain);
-          return;
+/// EmitMatchCode - Emit a matcher for N, going to the label for PatternNo
+/// if the match fails. At this point, we already know that the opcode for N
+/// matches, and the SDNode for the result has the RootName specified name.
+void PatternCodeEmitter::EmitMatchCode(TreePatternNode *N, TreePatternNode *P,
+                                       const std::string &RootName,
+                                       const std::string &ChainSuffix,
+                                       bool &FoundChain) {
+  
+  // Save loads/stores matched by a pattern.
+  if (!N->isLeaf() && N->getName().empty()) {
+    if (N->NodeHasProperty(SDNPMemOperand, CGP))
+      LSI.push_back(getNodeName(RootName));
+  }
+  
+  bool isRoot = (P == NULL);
+  // Emit instruction predicates. Each predicate is just a string for now.
+  if (isRoot) {
+    // Record input varargs info.
+    NumInputRootOps = N->getNumChildren();
+    emitCheck(PredicateCheck);
+  }
+  
+  if (N->isLeaf()) {
+    if (IntInit *II = dynamic_cast<IntInit*>(N->getLeafValue())) {
+      emitCheck("cast<ConstantSDNode>(" + getNodeName(RootName) +
+                ")->getSExtValue() == INT64_C(" +
+                itostr(II->getValue()) + ")");
+      return;
+    } else if (!NodeIsComplexPattern(N)) {
+      assert(0 && "Cannot match this as a leaf value!");
+      abort();
+    }
+  }
+  
+  // If this node has a name associated with it, capture it in VariableMap. If
+  // we already saw this in the pattern, emit code to verify dagness.
+  if (!N->getName().empty()) {
+    std::string &VarMapEntry = VariableMap[N->getName()];
+    if (VarMapEntry.empty()) {
+      VarMapEntry = RootName;
+    } else {
+      // If we get here, this is a second reference to a specific name.  Since
+      // we already have checked that the first reference is valid, we don't
+      // have to recursively match it, just check that it's the same as the
+      // previously named thing.
+      emitCheck(VarMapEntry + " == " + RootName);
+      return;
+    }
+    
+    if (!N->isLeaf())
+      OperatorMap[N->getName()] = N->getOperator();
+  }
+  
+  
+  // Emit code to load the child nodes and match their contents recursively.
+  unsigned OpNo = 0;
+  bool NodeHasChain = N->NodeHasProperty(SDNPHasChain, CGP);
+  bool HasChain     = N->TreeHasProperty(SDNPHasChain, CGP);
+  bool EmittedUseCheck = false;
+  if (HasChain) {
+    if (NodeHasChain)
+      OpNo = 1;
+    if (!isRoot) {
+      // Multiple uses of actual result?
+      emitCheck(getValueName(RootName) + ".hasOneUse()");
+      EmittedUseCheck = true;
+      if (NodeHasChain) {
+        // If the immediate use can somehow reach this node through another
+        // path, then we can't fold it either, or it will create a cycle.
+        // e.g. In the following diagram, XX can reach ld through YY. If
+        // ld is folded into XX, then YY is both a predecessor and a successor
+        // of XX.
+        //
+        //         [ld]
+        //         ^  ^
+        //         |  |
+        //        /   \---
+        //      /        [YY]
+        //      |         ^
+        //     [XX]-------|
+        bool NeedCheck = P != Pattern;
+        if (!NeedCheck) {
+          const SDNodeInfo &PInfo = CGP.getSDNodeInfo(P->getOperator());
+          NeedCheck =
+            P->getOperator() == CGP.get_intrinsic_void_sdnode() ||
+            P->getOperator() == CGP.get_intrinsic_w_chain_sdnode() ||
+            P->getOperator() == CGP.get_intrinsic_wo_chain_sdnode() ||
+            PInfo.getNumOperands() > 1 ||
+            PInfo.hasProperty(SDNPHasChain) ||
+            PInfo.hasProperty(SDNPInFlag) ||
+            PInfo.hasProperty(SDNPOptInFlag);
+        }
+        
+        if (NeedCheck) {
+          std::string ParentName(RootName.begin(), RootName.end()-1);
+          emitCheck("IsLegalAndProfitableToFold(" + getNodeName(RootName) +
+                    ", " + getNodeName(ParentName) + ", N)");
         }
       }
     }
     
-    for (unsigned i = 0, e = N->getNumChildren(); i != e; ++i, ++OpNo) {
-      emitInit("SDValue " + getValueName(RootName + utostr(OpNo)) + " = " +
-               getNodeName(RootName) + "->getOperand(" + utostr(OpNo) + ");");
-
-      EmitChildMatchCode(N->getChild(i), N, RootName + utostr(OpNo),
-                         ChainSuffix + utostr(OpNo), FoundChain);
+    if (NodeHasChain) {
+      if (FoundChain) {
+        emitCheck("(" + ChainName + ".getNode() == " +
+                  getNodeName(RootName) + " || "
+                  "IsChainCompatible(" + ChainName + ".getNode(), " +
+                  getNodeName(RootName) + "))");
+        OrigChains.push_back(std::make_pair(ChainName,
+                                            getValueName(RootName)));
+      } else
+        FoundChain = true;
+      ChainName = "Chain" + ChainSuffix;
+      emitInit("SDValue " + ChainName + " = " + getNodeName(RootName) +
+               "->getOperand(0);");
     }
-
-    // Handle cases when root is a complex pattern.
-    const ComplexPattern *CP;
-    if (isRoot && N->isLeaf() && (CP = NodeGetComplexPattern(N, CGP))) {
-      std::string Fn = CP->getSelectFunc();
-      unsigned NumOps = CP->getNumOperands();
-      for (unsigned i = 0; i < NumOps; ++i) {
-        emitDecl("CPTmp" + RootName + "_" + utostr(i));
-        emitCode("SDValue CPTmp" + RootName + "_" + utostr(i) + ";");
-      }
-      if (CP->hasProperty(SDNPHasChain)) {
-        emitDecl("CPInChain");
-        emitDecl("Chain" + ChainSuffix);
-        emitCode("SDValue CPInChain;");
-        emitCode("SDValue Chain" + ChainSuffix + ";");
+  }
+  
+  // Don't fold any node which reads or writes a flag and has multiple uses.
+  // FIXME: We really need to separate the concepts of flag and "glue". Those
+  // real flag results, e.g. X86CMP output, can have multiple uses.
+  // FIXME: If the optional incoming flag does not exist, then it is OK to
+  // fold it.
+  if (!isRoot &&
+      (N->TreeHasProperty(SDNPInFlag, CGP) ||
+       N->TreeHasProperty(SDNPOptInFlag, CGP) ||
+       N->TreeHasProperty(SDNPOutFlag, CGP))) {
+    if (!EmittedUseCheck) {
+      // Multiple uses of actual result?
+      emitCheck(getValueName(RootName) + ".hasOneUse()");
+    }
       }
-
-      std::string Code = Fn + "(" +
-                         getNodeName(RootName) + ", " +
-                         getValueName(RootName);
-      for (unsigned i = 0; i < NumOps; i++)
-        Code += ", CPTmp" + RootName + "_" + utostr(i);
-      if (CP->hasProperty(SDNPHasChain)) {
-        ChainName = "Chain" + ChainSuffix;
-        Code += ", CPInChain, Chain" + ChainSuffix;
+  
+  // If there are node predicates for this, emit the calls.
+  for (unsigned i = 0, e = N->getPredicateFns().size(); i != e; ++i)
+    emitCheck(N->getPredicateFns()[i] + "(" + getNodeName(RootName) + ")");
+  
+  // If this is an 'and R, 1234' where the operation is AND/OR and the RHS is
+  // a constant without a predicate fn that has more than one bit set, handle
+  // this as a special case.  This is usually for targets that have special
+  // handling of certain large constants (e.g. alpha with its 8/16/32-bit
+  // handling stuff).  Using these instructions is often far more efficient
+  // than materializing the constant.  Unfortunately, both the instcombiner
+  // and the dag combiner can often infer that bits are dead, and thus drop
+  // them from the mask in the dag.  For example, it might turn 'AND X, 255'
+  // into 'AND X, 254' if it knows the low bit is set.  Emit code that checks
+  // for and handles this case.
+  if (!N->isLeaf() && 
+      (N->getOperator()->getName() == "and" || 
+       N->getOperator()->getName() == "or") &&
+      N->getChild(1)->isLeaf() &&
+      N->getChild(1)->getPredicateFns().empty()) {
+    if (IntInit *II = dynamic_cast<IntInit*>(N->getChild(1)->getLeafValue())) {
+      if (!isPowerOf2_32(II->getValue())) {  // Don't bother with single bits.
+        emitInit("SDValue " + RootName + "0" + " = " +
+                 getNodeName(RootName) + "->getOperand(" + utostr(0) + ");");
+        emitInit("SDValue " + RootName + "1" + " = " +
+                 getNodeName(RootName) + "->getOperand(" + utostr(1) + ");");
+        
+        unsigned NTmp = TmpNo++;
+        emitCode("ConstantSDNode *Tmp" + utostr(NTmp) +
+                 " = dyn_cast<ConstantSDNode>(" +
+                 getNodeName(RootName + "1") + ");");
+        emitCheck("Tmp" + utostr(NTmp));
+        const char *MaskPredicate = N->getOperator()->getName() == "or"
+          ? "CheckOrMask(" : "CheckAndMask(";
+        emitCheck(MaskPredicate + getValueName(RootName + "0") +
+                  ", Tmp" + utostr(NTmp) +
+                  ", INT64_C(" + itostr(II->getValue()) + "))");
+        
+        EmitChildMatchCode(N->getChild(0), N, RootName + utostr(0),
+                           ChainSuffix + utostr(0), FoundChain);
+        return;
       }
-      emitCheck(Code + ")");
     }
   }
+  
+  for (unsigned i = 0, e = N->getNumChildren(); i != e; ++i, ++OpNo) {
+    emitInit("SDValue " + getValueName(RootName + utostr(OpNo)) + " = " +
+             getNodeName(RootName) + "->getOperand(" + utostr(OpNo) + ");");
+    
+    EmitChildMatchCode(N->getChild(i), N, RootName + utostr(OpNo),
+                       ChainSuffix + utostr(OpNo), FoundChain);
+  }
+  
+  // Handle cases when root is a complex pattern.
+  const ComplexPattern *CP;
+  if (isRoot && N->isLeaf() && (CP = N->getComplexPatternInfo(CGP))) {
+    std::string Fn = CP->getSelectFunc();
+    unsigned NumOps = CP->getNumOperands();
+    for (unsigned i = 0; i < NumOps; ++i) {
+      emitDecl("CPTmp" + RootName + "_" + utostr(i));
+      emitCode("SDValue CPTmp" + RootName + "_" + utostr(i) + ";");
+    }
+    if (CP->hasProperty(SDNPHasChain)) {
+      emitDecl("CPInChain");
+      emitDecl("Chain" + ChainSuffix);
+      emitCode("SDValue CPInChain;");
+      emitCode("SDValue Chain" + ChainSuffix + ";");
+    }
+    
+    std::string Code = Fn + "(" +
+                       getNodeName(RootName) + ", " +
+                       getValueName(RootName);
+    for (unsigned i = 0; i < NumOps; i++)
+      Code += ", CPTmp" + RootName + "_" + utostr(i);
+    if (CP->hasProperty(SDNPHasChain)) {
+      ChainName = "Chain" + ChainSuffix;
+      Code += ", CPInChain, Chain" + ChainSuffix;
+    }
+    emitCheck(Code + ")");
+  }
+}
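+
+// The AND/OR special case above matches a mask even after the DAG combiner
+// has dropped known-dead bits from it.  A hypothetical one-line model of the
+// core compatibility test (the real CheckAndMask/CheckOrMask helpers also
+// consult known-bits information before accepting a differing mask):

```cpp
#include <cassert>
#include <cstdint>

// Accept an actual AND mask that only clears bits relative to the mask the
// pattern asked for, i.e. it must not keep any bit the pattern would clear.
// (Hypothetical simplification; not the real helper.)
bool toyCheckAndMask(uint64_t ActualMask, uint64_t DesiredMask) {
  return (ActualMask & ~DesiredMask) == 0;
}
```
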
 
-  void EmitChildMatchCode(TreePatternNode *Child, TreePatternNode *Parent,
-                          const std::string &RootName, 
-                          const std::string &ChainSuffix, bool &FoundChain) {
-    if (!Child->isLeaf()) {
-      // If it's not a leaf, recursively match.
-      const SDNodeInfo &CInfo = CGP.getSDNodeInfo(Child->getOperator());
-      emitCheck(getNodeName(RootName) + "->getOpcode() == " +
-                CInfo.getEnumName());
-      EmitMatchCode(Child, Parent, RootName, ChainSuffix, FoundChain);
-      bool HasChain = false;
-      if (NodeHasProperty(Child, SDNPHasChain, CGP)) {
-        HasChain = true;
-        FoldedChains.push_back(std::make_pair(getValueName(RootName),
-                                              CInfo.getNumResults()));
-      }
-      if (NodeHasProperty(Child, SDNPOutFlag, CGP)) {
-        assert(FoldedFlag.first == "" && FoldedFlag.second == 0 &&
-               "Pattern folded multiple nodes which produce flags?");
-        FoldedFlag = std::make_pair(getValueName(RootName),
-                                    CInfo.getNumResults() + (unsigned)HasChain);
+void PatternCodeEmitter::EmitChildMatchCode(TreePatternNode *Child,
+                                            TreePatternNode *Parent,
+                                            const std::string &RootName, 
+                                            const std::string &ChainSuffix,
+                                            bool &FoundChain) {
+  if (!Child->isLeaf()) {
+    // If it's not a leaf, recursively match.
+    const SDNodeInfo &CInfo = CGP.getSDNodeInfo(Child->getOperator());
+    emitCheck(getNodeName(RootName) + "->getOpcode() == " +
+              CInfo.getEnumName());
+    EmitMatchCode(Child, Parent, RootName, ChainSuffix, FoundChain);
+    bool HasChain = false;
+    if (Child->NodeHasProperty(SDNPHasChain, CGP)) {
+      HasChain = true;
+      FoldedChains.push_back(std::make_pair(getValueName(RootName),
+                                            CInfo.getNumResults()));
+    }
+    if (Child->NodeHasProperty(SDNPOutFlag, CGP)) {
+      assert(FoldedFlag.first == "" && FoldedFlag.second == 0 &&
+             "Pattern folded multiple nodes which produce flags?");
+      FoldedFlag = std::make_pair(getValueName(RootName),
+                                  CInfo.getNumResults() + (unsigned)HasChain);
+    }
+  } else {
+    // If this child has a name associated with it, capture it in VarMap. If
+    // we already saw this in the pattern, emit code to verify dagness.
+    if (!Child->getName().empty()) {
+      std::string &VarMapEntry = VariableMap[Child->getName()];
+      if (VarMapEntry.empty()) {
+        VarMapEntry = getValueName(RootName);
+      } else {
+        // If we get here, this is a second reference to a specific name.
+        // Since we already have checked that the first reference is valid,
+        // we don't have to recursively match it, just check that it's the
+        // same as the previously named thing.
+        emitCheck(VarMapEntry + " == " + getValueName(RootName));
+        Duplicates.insert(getValueName(RootName));
+        return;
       }
-    } else {
-      // If this child has a name associated with it, capture it in VarMap. If
-      // we already saw this in the pattern, emit code to verify dagness.
-      if (!Child->getName().empty()) {
-        std::string &VarMapEntry = VariableMap[Child->getName()];
-        if (VarMapEntry.empty()) {
-          VarMapEntry = getValueName(RootName);
-        } else {
-          // If we get here, this is a second reference to a specific name.
-          // Since we already have checked that the first reference is valid,
-          // we don't have to recursively match it, just check that it's the
-          // same as the previously named thing.
-          emitCheck(VarMapEntry + " == " + getValueName(RootName));
-          Duplicates.insert(getValueName(RootName));
-          return;
+    }
+    
+    // Handle leaves of various types.
+    if (DefInit *DI = dynamic_cast<DefInit*>(Child->getLeafValue())) {
+      Record *LeafRec = DI->getDef();
+      if (LeafRec->isSubClassOf("RegisterClass") || 
+          LeafRec->isSubClassOf("PointerLikeRegClass")) {
+        // Handle register references.  Nothing to do here.
+      } else if (LeafRec->isSubClassOf("Register")) {
+        // Handle register references.
+      } else if (LeafRec->isSubClassOf("ComplexPattern")) {
+        // Handle complex pattern.
+        const ComplexPattern *CP = Child->getComplexPatternInfo(CGP);
+        std::string Fn = CP->getSelectFunc();
+        unsigned NumOps = CP->getNumOperands();
+        for (unsigned i = 0; i < NumOps; ++i) {
+          emitDecl("CPTmp" + RootName + "_" + utostr(i));
+          emitCode("SDValue CPTmp" + RootName + "_" + utostr(i) + ";");
         }
-      }
-      
-      // Handle leaves of various types.
-      if (DefInit *DI = dynamic_cast<DefInit*>(Child->getLeafValue())) {
-        Record *LeafRec = DI->getDef();
-        if (LeafRec->isSubClassOf("RegisterClass") || 
-            LeafRec->isSubClassOf("PointerLikeRegClass")) {
-          // Handle register references.  Nothing to do here.
-        } else if (LeafRec->isSubClassOf("Register")) {
-          // Handle register references.
-        } else if (LeafRec->isSubClassOf("ComplexPattern")) {
-          // Handle complex pattern.
-          const ComplexPattern *CP = NodeGetComplexPattern(Child, CGP);
-          std::string Fn = CP->getSelectFunc();
-          unsigned NumOps = CP->getNumOperands();
-          for (unsigned i = 0; i < NumOps; ++i) {
-            emitDecl("CPTmp" + RootName + "_" + utostr(i));
-            emitCode("SDValue CPTmp" + RootName + "_" + utostr(i) + ";");
-          }
-          if (CP->hasProperty(SDNPHasChain)) {
-            const SDNodeInfo &PInfo = CGP.getSDNodeInfo(Parent->getOperator());
-            FoldedChains.push_back(std::make_pair("CPInChain",
-                                                  PInfo.getNumResults()));
-            ChainName = "Chain" + ChainSuffix;
-            emitDecl("CPInChain");
-            emitDecl(ChainName);
-            emitCode("SDValue CPInChain;");
-            emitCode("SDValue " + ChainName + ";");
-          }
-          
-          std::string Code = Fn + "(N, ";
-          if (CP->hasProperty(SDNPHasChain)) {
-            std::string ParentName(RootName.begin(), RootName.end()-1);
-            Code += getValueName(ParentName) + ", ";
-          }
-          Code += getValueName(RootName);
-          for (unsigned i = 0; i < NumOps; i++)
-            Code += ", CPTmp" + RootName + "_" + utostr(i);
-          if (CP->hasProperty(SDNPHasChain))
-            Code += ", CPInChain, Chain" + ChainSuffix;
-          emitCheck(Code + ")");
-        } else if (LeafRec->getName() == "srcvalue") {
-          // Place holder for SRCVALUE nodes. Nothing to do here.
-        } else if (LeafRec->isSubClassOf("ValueType")) {
-          // Make sure this is the specified value type.
-          emitCheck("cast<VTSDNode>(" + getNodeName(RootName) +
-                    ")->getVT() == MVT::" + LeafRec->getName());
-        } else if (LeafRec->isSubClassOf("CondCode")) {
-          // Make sure this is the specified cond code.
-          emitCheck("cast<CondCodeSDNode>(" + getNodeName(RootName) +
-                    ")->get() == ISD::" + LeafRec->getName());
-        } else {
-#ifndef NDEBUG
-          Child->dump();
-          errs() << " ";
-#endif
-          assert(0 && "Unknown leaf type!");
+        if (CP->hasProperty(SDNPHasChain)) {
+          const SDNodeInfo &PInfo = CGP.getSDNodeInfo(Parent->getOperator());
+          FoldedChains.push_back(std::make_pair("CPInChain",
+                                                PInfo.getNumResults()));
+          ChainName = "Chain" + ChainSuffix;
+          emitDecl("CPInChain");
+          emitDecl(ChainName);
+          emitCode("SDValue CPInChain;");
+          emitCode("SDValue " + ChainName + ";");
         }
         
-        // If there are node predicates for this, emit the calls.
-        for (unsigned i = 0, e = Child->getPredicateFns().size(); i != e; ++i)
-          emitCheck(Child->getPredicateFns()[i] + "(" + getNodeName(RootName) +
-                    ")");
-      } else if (IntInit *II =
-                 dynamic_cast<IntInit*>(Child->getLeafValue())) {
-        unsigned NTmp = TmpNo++;
-        emitCode("ConstantSDNode *Tmp"+ utostr(NTmp) +
-                 " = dyn_cast<ConstantSDNode>("+
-                 getNodeName(RootName) + ");");
-        emitCheck("Tmp" + utostr(NTmp));
-        unsigned CTmp = TmpNo++;
-        emitCode("int64_t CN"+ utostr(CTmp) +
-                 " = Tmp" + utostr(NTmp) + "->getSExtValue();");
-        emitCheck("CN" + utostr(CTmp) + " == "
-                  "INT64_C(" +itostr(II->getValue()) + ")");
+        std::string Code = Fn + "(N, ";
+        if (CP->hasProperty(SDNPHasChain)) {
+          std::string ParentName(RootName.begin(), RootName.end()-1);
+          Code += getValueName(ParentName) + ", ";
+        }
+        Code += getValueName(RootName);
+        for (unsigned i = 0; i < NumOps; i++)
+          Code += ", CPTmp" + RootName + "_" + utostr(i);
+        if (CP->hasProperty(SDNPHasChain))
+          Code += ", CPInChain, Chain" + ChainSuffix;
+        emitCheck(Code + ")");
+      } else if (LeafRec->getName() == "srcvalue") {
+        // Placeholder for SRCVALUE nodes. Nothing to do here.
+      } else if (LeafRec->isSubClassOf("ValueType")) {
+        // Make sure this is the specified value type.
+        emitCheck("cast<VTSDNode>(" + getNodeName(RootName) +
+                  ")->getVT() == MVT::" + LeafRec->getName());
+      } else if (LeafRec->isSubClassOf("CondCode")) {
+        // Make sure this is the specified cond code.
+        emitCheck("cast<CondCodeSDNode>(" + getNodeName(RootName) +
+                  ")->get() == ISD::" + LeafRec->getName());
       } else {
 #ifndef NDEBUG
         Child->dump();
+        errs() << " ";
 #endif
         assert(0 && "Unknown leaf type!");
       }
+      
+      // If there are node predicates for this, emit the calls.
+      for (unsigned i = 0, e = Child->getPredicateFns().size(); i != e; ++i)
+        emitCheck(Child->getPredicateFns()[i] + "(" + getNodeName(RootName) +
+                  ")");
+    } else if (IntInit *II =
+               dynamic_cast<IntInit*>(Child->getLeafValue())) {
+      unsigned NTmp = TmpNo++;
+      emitCode("ConstantSDNode *Tmp"+ utostr(NTmp) +
+               " = dyn_cast<ConstantSDNode>("+
+               getNodeName(RootName) + ");");
+      emitCheck("Tmp" + utostr(NTmp));
+      unsigned CTmp = TmpNo++;
+      emitCode("int64_t CN"+ utostr(CTmp) +
+               " = Tmp" + utostr(NTmp) + "->getSExtValue();");
+      emitCheck("CN" + utostr(CTmp) + " == "
+                "INT64_C(" +itostr(II->getValue()) + ")");
+    } else {
+#ifndef NDEBUG
+      Child->dump();
+#endif
+      assert(0 && "Unknown leaf type!");
     }
   }
+}
 
-  /// EmitResultCode - Emit the action for a pattern.  Now that it has matched
-  /// we actually have to build a DAG!
-  std::vector<std::string>
-  EmitResultCode(TreePatternNode *N, std::vector<Record*> DstRegs,
-                 bool InFlagDecled, bool ResNodeDecled,
-                 bool LikeLeaf = false, bool isRoot = false) {
-    // List of arguments of getMachineNode() or SelectNodeTo().
-    std::vector<std::string> NodeOps;
-    // This is something selected from the pattern we matched.
-    if (!N->getName().empty()) {
-      const std::string &VarName = N->getName();
-      std::string Val = VariableMap[VarName];
-      bool ModifiedVal = false;
-      if (Val.empty()) {
-        errs() << "Variable '" << VarName << " referenced but not defined "
-             << "and not caught earlier!\n";
-        abort();
-      }
-      if (Val[0] == 'T' && Val[1] == 'm' && Val[2] == 'p') {
-        // Already selected this operand, just return the tmpval.
-        NodeOps.push_back(getValueName(Val));
-        return NodeOps;
-      }
-
-      const ComplexPattern *CP;
-      unsigned ResNo = TmpNo++;
-      if (!N->isLeaf() && N->getOperator()->getName() == "imm") {
-        assert(N->getExtTypes().size() == 1 && "Multiple types not handled!");
-        std::string CastType;
-        std::string TmpVar =  "Tmp" + utostr(ResNo);
-        switch (N->getTypeNum(0)) {
+/// EmitResultCode - Emit the action for a pattern.  Now that it has matched
+/// we actually have to build a DAG!
+std::vector<std::string>
+PatternCodeEmitter::EmitResultCode(TreePatternNode *N, 
+                                   std::vector<Record*> DstRegs,
+                                   bool InFlagDecled, bool ResNodeDecled,
+                                   bool LikeLeaf, bool isRoot) {
+  // List of arguments of getMachineNode() or SelectNodeTo().
+  std::vector<std::string> NodeOps;
+  // This is something selected from the pattern we matched.
+  if (!N->getName().empty()) {
+    const std::string &VarName = N->getName();
+    std::string Val = VariableMap[VarName];
+    bool ModifiedVal = false;
+    if (Val.empty()) {
+      errs() << "Variable '" << VarName << "' referenced but not defined "
+      << "and not caught earlier!\n";
+      abort();
+    }
+    if (Val[0] == 'T' && Val[1] == 'm' && Val[2] == 'p') {
+      // Already selected this operand, just return the tmpval.
+      NodeOps.push_back(getValueName(Val));
+      return NodeOps;
+    }
+    
+    const ComplexPattern *CP;
+    unsigned ResNo = TmpNo++;
+    if (!N->isLeaf() && N->getOperator()->getName() == "imm") {
+      assert(N->getExtTypes().size() == 1 && "Multiple types not handled!");
+      std::string CastType;
+      std::string TmpVar =  "Tmp" + utostr(ResNo);
+      switch (N->getTypeNum(0)) {
         default:
           errs() << "Cannot handle " << getEnumName(N->getTypeNum(0))
-               << " type as an immediate constant. Aborting\n";
+          << " type as an immediate constant. Aborting\n";
           abort();
         case MVT::i1:  CastType = "bool"; break;
         case MVT::i8:  CastType = "unsigned char"; break;
         case MVT::i16: CastType = "unsigned short"; break;
         case MVT::i32: CastType = "unsigned"; break;
         case MVT::i64: CastType = "uint64_t"; break;
-        }
-        emitCode("SDValue " + TmpVar + 
-                 " = CurDAG->getTargetConstant(((" + CastType +
-                 ") cast<ConstantSDNode>(" + Val + ")->getZExtValue()), " +
+      }
+      emitCode("SDValue " + TmpVar + 
+               " = CurDAG->getTargetConstant(((" + CastType +
+               ") cast<ConstantSDNode>(" + Val + ")->getZExtValue()), " +
+               getEnumName(N->getTypeNum(0)) + ");");
+      // Add Tmp<ResNo> to VariableMap, so that we don't multiply select this
+      // value if used multiple times by this pattern result.
+      Val = TmpVar;
+      ModifiedVal = true;
+      NodeOps.push_back(getValueName(Val));
+    } else if (!N->isLeaf() && N->getOperator()->getName() == "fpimm") {
+      assert(N->getExtTypes().size() == 1 && "Multiple types not handled!");
+      std::string TmpVar =  "Tmp" + utostr(ResNo);
+      emitCode("SDValue " + TmpVar + 
+               " = CurDAG->getTargetConstantFP(*cast<ConstantFPSDNode>(" + 
+               Val + ")->getConstantFPValue(), cast<ConstantFPSDNode>(" +
+               Val + ")->getValueType(0));");
+      // Add Tmp<ResNo> to VariableMap, so that we don't multiply select this
+      // value if used multiple times by this pattern result.
+      Val = TmpVar;
+      ModifiedVal = true;
+      NodeOps.push_back(getValueName(Val));
+    } else if (!N->isLeaf() && N->getOperator()->getName() == "texternalsym"){
+      Record *Op = OperatorMap[N->getName()];
+      // Transform ExternalSymbol to TargetExternalSymbol
+      if (Op && Op->getName() == "externalsym") {
+        std::string TmpVar = "Tmp"+utostr(ResNo);
+        emitCode("SDValue " + TmpVar + " = CurDAG->getTarget"
+                 "ExternalSymbol(cast<ExternalSymbolSDNode>(" +
+                 Val + ")->getSymbol(), " +
                  getEnumName(N->getTypeNum(0)) + ");");
-        // Add Tmp<ResNo> to VariableMap, so that we don't multiply select this
-        // value if used multiple times by this pattern result.
+        // Add Tmp<ResNo> to VariableMap, so that we don't multiply select
+        // this value if used multiple times by this pattern result.
         Val = TmpVar;
         ModifiedVal = true;
-        NodeOps.push_back(getValueName(Val));
-      } else if (!N->isLeaf() && N->getOperator()->getName() == "fpimm") {
-        assert(N->getExtTypes().size() == 1 && "Multiple types not handled!");
-        std::string TmpVar =  "Tmp" + utostr(ResNo);
-        emitCode("SDValue " + TmpVar + 
-                 " = CurDAG->getTargetConstantFP(*cast<ConstantFPSDNode>(" + 
-                 Val + ")->getConstantFPValue(), cast<ConstantFPSDNode>(" +
-                 Val + ")->getValueType(0));");
-        // Add Tmp<ResNo> to VariableMap, so that we don't multiply select this
-        // value if used multiple times by this pattern result.
+      }
+      NodeOps.push_back(getValueName(Val));
+    } else if (!N->isLeaf() && (N->getOperator()->getName() == "tglobaladdr"
+                                || N->getOperator()->getName() == "tglobaltlsaddr")) {
+      Record *Op = OperatorMap[N->getName()];
+      // Transform GlobalAddress to TargetGlobalAddress
+      if (Op && (Op->getName() == "globaladdr" ||
+                 Op->getName() == "globaltlsaddr")) {
+        std::string TmpVar = "Tmp" + utostr(ResNo);
+        emitCode("SDValue " + TmpVar + " = CurDAG->getTarget"
+                 "GlobalAddress(cast<GlobalAddressSDNode>(" + Val +
+                 ")->getGlobal(), " + getEnumName(N->getTypeNum(0)) +
+                 ");");
+        // Add Tmp<ResNo> to VariableMap, so that we don't multiply select
+        // this value if used multiple times by this pattern result.
         Val = TmpVar;
         ModifiedVal = true;
-        NodeOps.push_back(getValueName(Val));
-      } else if (!N->isLeaf() && N->getOperator()->getName() == "texternalsym"){
-        Record *Op = OperatorMap[N->getName()];
-        // Transform ExternalSymbol to TargetExternalSymbol
-        if (Op && Op->getName() == "externalsym") {
-          std::string TmpVar = "Tmp"+utostr(ResNo);
-          emitCode("SDValue " + TmpVar + " = CurDAG->getTarget"
-                   "ExternalSymbol(cast<ExternalSymbolSDNode>(" +
-                   Val + ")->getSymbol(), " +
-                   getEnumName(N->getTypeNum(0)) + ");");
-          // Add Tmp<ResNo> to VariableMap, so that we don't multiply select
-          // this value if used multiple times by this pattern result.
-          Val = TmpVar;
-          ModifiedVal = true;
-        }
-        NodeOps.push_back(getValueName(Val));
-      } else if (!N->isLeaf() && (N->getOperator()->getName() == "tglobaladdr"
-                 || N->getOperator()->getName() == "tglobaltlsaddr")) {
-        Record *Op = OperatorMap[N->getName()];
-        // Transform GlobalAddress to TargetGlobalAddress
-        if (Op && (Op->getName() == "globaladdr" ||
-                   Op->getName() == "globaltlsaddr")) {
-          std::string TmpVar = "Tmp" + utostr(ResNo);
-          emitCode("SDValue " + TmpVar + " = CurDAG->getTarget"
-                   "GlobalAddress(cast<GlobalAddressSDNode>(" + Val +
-                   ")->getGlobal(), " + getEnumName(N->getTypeNum(0)) +
-                   ");");
-          // Add Tmp<ResNo> to VariableMap, so that we don't multiply select
-          // this value if used multiple times by this pattern result.
-          Val = TmpVar;
-          ModifiedVal = true;
-        }
-        NodeOps.push_back(getValueName(Val));
-      } else if (!N->isLeaf()
-                 && (N->getOperator()->getName() == "texternalsym"
-                      || N->getOperator()->getName() == "tconstpool")) {
-        // Do not rewrite the variable name, since we don't generate a new
-        // temporary.
-        NodeOps.push_back(getValueName(Val));
-      } else if (N->isLeaf() && (CP = NodeGetComplexPattern(N, CGP))) {
-        for (unsigned i = 0; i < CP->getNumOperands(); ++i) {
-          NodeOps.push_back(getValueName("CPTmp" + Val + "_" + utostr(i)));
-        }
-      } else {
-        // This node, probably wrapped in a SDNodeXForm, behaves like a leaf
-        // node even if it isn't one. Don't select it.
-        if (!LikeLeaf) {
-          if (isRoot && N->isLeaf()) {
-            emitCode("ReplaceUses(SDValue(N, 0), " + Val + ");");
-            emitCode("return NULL;");
-          }
-        }
-        NodeOps.push_back(getValueName(Val));
       }
-
-      if (ModifiedVal) {
-        VariableMap[VarName] = Val;
+      NodeOps.push_back(getValueName(Val));
+    } else if (!N->isLeaf()
+               && (N->getOperator()->getName() == "texternalsym" ||
+                   N->getOperator()->getName() == "tconstpool")) {
+      // Do not rewrite the variable name, since we don't generate a new
+      // temporary.
+      NodeOps.push_back(getValueName(Val));
+    } else if (N->isLeaf() && (CP = N->getComplexPatternInfo(CGP))) {
+      for (unsigned i = 0; i < CP->getNumOperands(); ++i) {
+        NodeOps.push_back(getValueName("CPTmp" + Val + "_" + utostr(i)));
       }
-      return NodeOps;
-    }
-    if (N->isLeaf()) {
-      // If this is an explicit register reference, handle it.
-      if (DefInit *DI = dynamic_cast<DefInit*>(N->getLeafValue())) {
-        unsigned ResNo = TmpNo++;
-        if (DI->getDef()->isSubClassOf("Register")) {
-          emitCode("SDValue Tmp" + utostr(ResNo) + " = CurDAG->getRegister(" +
-                   getQualifiedName(DI->getDef()) + ", " +
-                   getEnumName(N->getTypeNum(0)) + ");");
-          NodeOps.push_back(getValueName("Tmp" + utostr(ResNo)));
-          return NodeOps;
-        } else if (DI->getDef()->getName() == "zero_reg") {
-          emitCode("SDValue Tmp" + utostr(ResNo) +
-                   " = CurDAG->getRegister(0, " +
-                   getEnumName(N->getTypeNum(0)) + ");");
-          NodeOps.push_back(getValueName("Tmp" + utostr(ResNo)));
-          return NodeOps;
-        } else if (DI->getDef()->isSubClassOf("RegisterClass")) {
-          // Handle a reference to a register class. This is used
-          // in COPY_TO_SUBREG instructions.
-          emitCode("SDValue Tmp" + utostr(ResNo) +
-                   " = CurDAG->getTargetConstant(" +
-                   getQualifiedName(DI->getDef()) + "RegClassID, " +
-                   "MVT::i32);");
-          NodeOps.push_back(getValueName("Tmp" + utostr(ResNo)));
-          return NodeOps;
+    } else {
+      // This node, probably wrapped in a SDNodeXForm, behaves like a leaf
+      // node even if it isn't one. Don't select it.
+      if (!LikeLeaf) {
+        if (isRoot && N->isLeaf()) {
+          emitCode("ReplaceUses(SDValue(N, 0), " + Val + ");");
+          emitCode("return NULL;");
         }
-      } else if (IntInit *II = dynamic_cast<IntInit*>(N->getLeafValue())) {
-        unsigned ResNo = TmpNo++;
-        assert(N->getExtTypes().size() == 1 && "Multiple types not handled!");
-        emitCode("SDValue Tmp" + utostr(ResNo) + 
-                 " = CurDAG->getTargetConstant(0x" + 
-                 utohexstr((uint64_t) II->getValue()) +
-                 "ULL, " + getEnumName(N->getTypeNum(0)) + ");");
+      }
+      NodeOps.push_back(getValueName(Val));
+    }
+    
+    if (ModifiedVal)
+      VariableMap[VarName] = Val;
+    return NodeOps;
+  }
+  if (N->isLeaf()) {
+    // If this is an explicit register reference, handle it.
+    if (DefInit *DI = dynamic_cast<DefInit*>(N->getLeafValue())) {
+      unsigned ResNo = TmpNo++;
+      if (DI->getDef()->isSubClassOf("Register")) {
+        emitCode("SDValue Tmp" + utostr(ResNo) + " = CurDAG->getRegister(" +
+                 getQualifiedName(DI->getDef()) + ", " +
+                 getEnumName(N->getTypeNum(0)) + ");");
+        NodeOps.push_back(getValueName("Tmp" + utostr(ResNo)));
+        return NodeOps;
+      } else if (DI->getDef()->getName() == "zero_reg") {
+        emitCode("SDValue Tmp" + utostr(ResNo) +
+                 " = CurDAG->getRegister(0, " +
+                 getEnumName(N->getTypeNum(0)) + ");");
+        NodeOps.push_back(getValueName("Tmp" + utostr(ResNo)));
+        return NodeOps;
+      } else if (DI->getDef()->isSubClassOf("RegisterClass")) {
+        // Handle a reference to a register class. This is used
+        // in COPY_TO_SUBREG instructions.
+        emitCode("SDValue Tmp" + utostr(ResNo) +
+                 " = CurDAG->getTargetConstant(" +
+                 getQualifiedName(DI->getDef()) + "RegClassID, " +
+                 "MVT::i32);");
         NodeOps.push_back(getValueName("Tmp" + utostr(ResNo)));
         return NodeOps;
       }
+    } else if (IntInit *II = dynamic_cast<IntInit*>(N->getLeafValue())) {
+      unsigned ResNo = TmpNo++;
+      assert(N->getExtTypes().size() == 1 && "Multiple types not handled!");
+      emitCode("SDValue Tmp" + utostr(ResNo) + 
+               " = CurDAG->getTargetConstant(0x" + 
+               utohexstr((uint64_t) II->getValue()) +
+               "ULL, " + getEnumName(N->getTypeNum(0)) + ");");
+      NodeOps.push_back(getValueName("Tmp" + utostr(ResNo)));
+      return NodeOps;
+    }
     
 #ifndef NDEBUG
-      N->dump();
+    N->dump();
 #endif
-      assert(0 && "Unknown leaf type!");
-      return NodeOps;
+    assert(0 && "Unknown leaf type!");
+    return NodeOps;
+  }
+  
+  Record *Op = N->getOperator();
+  if (Op->isSubClassOf("Instruction")) {
+    const CodeGenTarget &CGT = CGP.getTargetInfo();
+    CodeGenInstruction &II = CGT.getInstruction(Op->getName());
+    const DAGInstruction &Inst = CGP.getInstruction(Op);
+    const TreePattern *InstPat = Inst.getPattern();
+    // FIXME: Assume actual pattern comes before "implicit".
+    TreePatternNode *InstPatNode =
+    isRoot ? (InstPat ? InstPat->getTree(0) : Pattern)
+    : (InstPat ? InstPat->getTree(0) : NULL);
+    if (InstPatNode && !InstPatNode->isLeaf() &&
+        InstPatNode->getOperator()->getName() == "set") {
+      InstPatNode = InstPatNode->getChild(InstPatNode->getNumChildren()-1);
     }
-
-    Record *Op = N->getOperator();
-    if (Op->isSubClassOf("Instruction")) {
-      const CodeGenTarget &CGT = CGP.getTargetInfo();
-      CodeGenInstruction &II = CGT.getInstruction(Op->getName());
-      const DAGInstruction &Inst = CGP.getInstruction(Op);
-      const TreePattern *InstPat = Inst.getPattern();
-      // FIXME: Assume actual pattern comes before "implicit".
-      TreePatternNode *InstPatNode =
-        isRoot ? (InstPat ? InstPat->getTree(0) : Pattern)
-               : (InstPat ? InstPat->getTree(0) : NULL);
-      if (InstPatNode && !InstPatNode->isLeaf() &&
-          InstPatNode->getOperator()->getName() == "set") {
-        InstPatNode = InstPatNode->getChild(InstPatNode->getNumChildren()-1);
-      }
-      bool IsVariadic = isRoot && II.isVariadic;
-      // FIXME: fix how we deal with physical register operands.
-      bool HasImpInputs  = isRoot && Inst.getNumImpOperands() > 0;
-      bool HasImpResults = isRoot && DstRegs.size() > 0;
-      bool NodeHasOptInFlag = isRoot &&
-        PatternHasProperty(Pattern, SDNPOptInFlag, CGP);
-      bool NodeHasInFlag  = isRoot &&
-        PatternHasProperty(Pattern, SDNPInFlag, CGP);
-      bool NodeHasOutFlag = isRoot &&
-        PatternHasProperty(Pattern, SDNPOutFlag, CGP);
-      bool NodeHasChain = InstPatNode &&
-        PatternHasProperty(InstPatNode, SDNPHasChain, CGP);
-      bool InputHasChain = isRoot &&
-        NodeHasProperty(Pattern, SDNPHasChain, CGP);
-      unsigned NumResults = Inst.getNumResults();    
-      unsigned NumDstRegs = HasImpResults ? DstRegs.size() : 0;
-
-      // Record output varargs info.
-      OutputIsVariadic = IsVariadic;
-
-      if (NodeHasOptInFlag) {
-        emitCode("bool HasInFlag = "
-                   "(N->getOperand(N->getNumOperands()-1).getValueType() == "
-                   "MVT::Flag);");
-      }
-      if (IsVariadic)
-        emitCode("SmallVector<SDValue, 8> Ops" + utostr(OpcNo) + ";");
-
-      // How many results is this pattern expected to produce?
-      unsigned NumPatResults = 0;
-      for (unsigned i = 0, e = Pattern->getExtTypes().size(); i != e; i++) {
-        MVT::SimpleValueType VT = Pattern->getTypeNum(i);
-        if (VT != MVT::isVoid && VT != MVT::Flag)
-          NumPatResults++;
+    bool IsVariadic = isRoot && II.isVariadic;
+    // FIXME: fix how we deal with physical register operands.
+    bool HasImpInputs  = isRoot && Inst.getNumImpOperands() > 0;
+    bool HasImpResults = isRoot && DstRegs.size() > 0;
+    bool NodeHasOptInFlag = isRoot &&
+      Pattern->TreeHasProperty(SDNPOptInFlag, CGP);
+    bool NodeHasInFlag  = isRoot &&
+      Pattern->TreeHasProperty(SDNPInFlag, CGP);
+    bool NodeHasOutFlag = isRoot &&
+      Pattern->TreeHasProperty(SDNPOutFlag, CGP);
+    bool NodeHasChain = InstPatNode &&
+      InstPatNode->TreeHasProperty(SDNPHasChain, CGP);
+    bool InputHasChain = isRoot && Pattern->NodeHasProperty(SDNPHasChain, CGP);
+    unsigned NumResults = Inst.getNumResults();    
+    unsigned NumDstRegs = HasImpResults ? DstRegs.size() : 0;
+    
+    // Record output varargs info.
+    OutputIsVariadic = IsVariadic;
+    
+    if (NodeHasOptInFlag) {
+      emitCode("bool HasInFlag = "
+               "(N->getOperand(N->getNumOperands()-1).getValueType() == "
+               "MVT::Flag);");
+    }
+    if (IsVariadic)
+      emitCode("SmallVector<SDValue, 8> Ops" + utostr(OpcNo) + ";");
+    
+    // How many results is this pattern expected to produce?
+    unsigned NumPatResults = 0;
+    for (unsigned i = 0, e = Pattern->getExtTypes().size(); i != e; i++) {
+      MVT::SimpleValueType VT = Pattern->getTypeNum(i);
+      if (VT != MVT::isVoid && VT != MVT::Flag)
+        NumPatResults++;
+    }
+    
+    if (OrigChains.size() > 0) {
+      // The original input chain is being ignored. If it is not just
+      // pointing to the op that's being folded, we should create a
+      // TokenFactor with it and the chain of the folded op as the new chain.
+      // We could potentially be doing multiple levels of folding, in that
+      // case, the TokenFactor can have more operands.
+      emitCode("SmallVector<SDValue, 8> InChains;");
+      for (unsigned i = 0, e = OrigChains.size(); i < e; ++i) {
+        emitCode("if (" + OrigChains[i].first + ".getNode() != " +
+                 OrigChains[i].second + ".getNode()) {");
+        emitCode("  InChains.push_back(" + OrigChains[i].first + ");");
+        emitCode("}");
       }
-
-      if (OrigChains.size() > 0) {
-        // The original input chain is being ignored. If it is not just
-        // pointing to the op that's being folded, we should create a
-        // TokenFactor with it and the chain of the folded op as the new chain.
-        // We could potentially be doing multiple levels of folding, in that
-        // case, the TokenFactor can have more operands.
-        emitCode("SmallVector<SDValue, 8> InChains;");
-        for (unsigned i = 0, e = OrigChains.size(); i < e; ++i) {
-          emitCode("if (" + OrigChains[i].first + ".getNode() != " +
-                   OrigChains[i].second + ".getNode()) {");
-          emitCode("  InChains.push_back(" + OrigChains[i].first + ");");
-          emitCode("}");
-        }
-        emitCode("InChains.push_back(" + ChainName + ");");
-        emitCode(ChainName + " = CurDAG->getNode(ISD::TokenFactor, "
-                 "N->getDebugLoc(), MVT::Other, "
-                 "&InChains[0], InChains.size());");
-        if (GenDebug) {
-          emitCode("CurDAG->setSubgraphColor(" + ChainName +".getNode(), \"yellow\");");
-          emitCode("CurDAG->setSubgraphColor(" + ChainName +".getNode(), \"black\");");
-        }
+      emitCode("InChains.push_back(" + ChainName + ");");
+      emitCode(ChainName + " = CurDAG->getNode(ISD::TokenFactor, "
+               "N->getDebugLoc(), MVT::Other, "
+               "&InChains[0], InChains.size());");
+      if (GenDebug) {
+        emitCode("CurDAG->setSubgraphColor(" + ChainName +".getNode(), \"yellow\");");
+        emitCode("CurDAG->setSubgraphColor(" + ChainName +".getNode(), \"black\");");
       }
-
-      // Loop over all of the operands of the instruction pattern, emitting code
-      // to fill them all in.  The node 'N' usually has number children equal to
-      // the number of input operands of the instruction.  However, in cases
-      // where there are predicate operands for an instruction, we need to fill
-      // in the 'execute always' values.  Match up the node operands to the
-      // instruction operands to do this.
-      std::vector<std::string> AllOps;
-      for (unsigned ChildNo = 0, InstOpNo = NumResults;
-           InstOpNo != II.OperandList.size(); ++InstOpNo) {
-        std::vector<std::string> Ops;
-        
-        // Determine what to emit for this operand.
-        Record *OperandNode = II.OperandList[InstOpNo].Rec;
-        if ((OperandNode->isSubClassOf("PredicateOperand") ||
-             OperandNode->isSubClassOf("OptionalDefOperand")) &&
-            !CGP.getDefaultOperand(OperandNode).DefaultOps.empty()) {
-          // This is a predicate or optional def operand; emit the
-          // 'default ops' operands.
-          const DAGDefaultOperand &DefaultOp =
-            CGP.getDefaultOperand(II.OperandList[InstOpNo].Rec);
-          for (unsigned i = 0, e = DefaultOp.DefaultOps.size(); i != e; ++i) {
-            Ops = EmitResultCode(DefaultOp.DefaultOps[i], DstRegs,
-                                 InFlagDecled, ResNodeDecled);
-            AllOps.insert(AllOps.end(), Ops.begin(), Ops.end());
-          }
-        } else {
-          // Otherwise this is a normal operand or a predicate operand without
-          // 'execute always'; emit it.
-          Ops = EmitResultCode(N->getChild(ChildNo), DstRegs,
+    }
+    
+    // Loop over all of the operands of the instruction pattern, emitting code
+    // to fill them all in.  The node 'N' usually has a number of children
+    // equal to the number of input operands of the instruction.  However, in
+    // where there are predicate operands for an instruction, we need to fill
+    // in the 'execute always' values.  Match up the node operands to the
+    // instruction operands to do this.
+    std::vector<std::string> AllOps;
+    for (unsigned ChildNo = 0, InstOpNo = NumResults;
+         InstOpNo != II.OperandList.size(); ++InstOpNo) {
+      std::vector<std::string> Ops;
+      
+      // Determine what to emit for this operand.
+      Record *OperandNode = II.OperandList[InstOpNo].Rec;
+      if ((OperandNode->isSubClassOf("PredicateOperand") ||
+           OperandNode->isSubClassOf("OptionalDefOperand")) &&
+          !CGP.getDefaultOperand(OperandNode).DefaultOps.empty()) {
+        // This is a predicate or optional def operand; emit the
+        // 'default ops' operands.
+        const DAGDefaultOperand &DefaultOp =
+        CGP.getDefaultOperand(II.OperandList[InstOpNo].Rec);
+        for (unsigned i = 0, e = DefaultOp.DefaultOps.size(); i != e; ++i) {
+          Ops = EmitResultCode(DefaultOp.DefaultOps[i], DstRegs,
                                InFlagDecled, ResNodeDecled);
           AllOps.insert(AllOps.end(), Ops.begin(), Ops.end());
-          ++ChildNo;
         }
-      }
-
-      // Emit all the chain and CopyToReg stuff.
-      bool ChainEmitted = NodeHasChain;
-      if (NodeHasInFlag || HasImpInputs)
-        EmitInFlagSelectCode(Pattern, "N", ChainEmitted,
-                             InFlagDecled, ResNodeDecled, true);
-      if (NodeHasOptInFlag || NodeHasInFlag || HasImpInputs) {
-        if (!InFlagDecled) {
-          emitCode("SDValue InFlag(0, 0);");
-          InFlagDecled = true;
-        }
-        if (NodeHasOptInFlag) {
-          emitCode("if (HasInFlag) {");
-          emitCode("  InFlag = N->getOperand(N->getNumOperands()-1);");
-          emitCode("}");
-        }
-      }
-
-      unsigned ResNo = TmpNo++;
-
-      unsigned OpsNo = OpcNo;
-      std::string CodePrefix;
-      bool ChainAssignmentNeeded = NodeHasChain && !isRoot;
-      std::deque<std::string> After;
-      std::string NodeName;
-      if (!isRoot) {
-        NodeName = "Tmp" + utostr(ResNo);
-        CodePrefix = "SDValue " + NodeName + "(";
       } else {
-        NodeName = "ResNode";
-        if (!ResNodeDecled) {
-          CodePrefix = "SDNode *" + NodeName + " = ";
-          ResNodeDecled = true;
-        } else
-          CodePrefix = NodeName + " = ";
-      }
-
-      std::string Code = "Opc" + utostr(OpcNo);
-
-      if (!isRoot || (InputHasChain && !NodeHasChain))
-        // For call to "getMachineNode()".
-        Code += ", N->getDebugLoc()";
-
-      emitOpcode(II.Namespace + "::" + II.TheDef->getName());
-
-      // Output order: results, chain, flags
-      // Result types.
-      if (NumResults > 0 && N->getTypeNum(0) != MVT::isVoid) {
-        Code += ", VT" + utostr(VTNo);
-        emitVT(getEnumName(N->getTypeNum(0)));
+        // Otherwise this is a normal operand or a predicate operand without
+        // 'execute always'; emit it.
+        Ops = EmitResultCode(N->getChild(ChildNo), DstRegs,
+                             InFlagDecled, ResNodeDecled);
+        AllOps.insert(AllOps.end(), Ops.begin(), Ops.end());
+        ++ChildNo;
       }
-      // Add types for implicit results in physical registers, scheduler will
-      // care of adding copyfromreg nodes.
-      for (unsigned i = 0; i < NumDstRegs; i++) {
-        Record *RR = DstRegs[i];
-        if (RR->isSubClassOf("Register")) {
-          MVT::SimpleValueType RVT = getRegisterValueType(RR, CGT);
-          Code += ", " + getEnumName(RVT);
-        }
+    }
+    
+    // Emit all the chain and CopyToReg stuff.
+    bool ChainEmitted = NodeHasChain;
+    if (NodeHasInFlag || HasImpInputs)
+      EmitInFlagSelectCode(Pattern, "N", ChainEmitted,
+                           InFlagDecled, ResNodeDecled, true);
+    if (NodeHasOptInFlag || NodeHasInFlag || HasImpInputs) {
+      if (!InFlagDecled) {
+        emitCode("SDValue InFlag(0, 0);");
+        InFlagDecled = true;
       }
-      if (NodeHasChain)
-        Code += ", MVT::Other";
-      if (NodeHasOutFlag)
-        Code += ", MVT::Flag";
-
-      // Inputs.
-      if (IsVariadic) {
-        for (unsigned i = 0, e = AllOps.size(); i != e; ++i)
-          emitCode("Ops" + utostr(OpsNo) + ".push_back(" + AllOps[i] + ");");
-        AllOps.clear();
-
-        // Figure out whether any operands at the end of the op list are not
-        // part of the variable section.
-        std::string EndAdjust;
-        if (NodeHasInFlag || HasImpInputs)
-          EndAdjust = "-1";  // Always has one flag.
-        else if (NodeHasOptInFlag)
-          EndAdjust = "-(HasInFlag?1:0)"; // May have a flag.
-
-        emitCode("for (unsigned i = NumInputRootOps + " + utostr(NodeHasChain) +
-                 ", e = N->getNumOperands()" + EndAdjust + "; i != e; ++i) {");
-
-        emitCode("  Ops" + utostr(OpsNo) + ".push_back(N->getOperand(i));");
+      if (NodeHasOptInFlag) {
+        emitCode("if (HasInFlag) {");
+        emitCode("  InFlag = N->getOperand(N->getNumOperands()-1);");
         emitCode("}");
       }
-
-      // Populate MemRefs with entries for each memory accesses covered by 
-      // this pattern.
-      if (isRoot && !LSI.empty()) {
-        std::string MemRefs = "MemRefs" + utostr(OpsNo);
-        emitCode("MachineSDNode::mmo_iterator " + MemRefs + " = "
-                 "MF->allocateMemRefsArray(" + utostr(LSI.size()) + ");");
-        for (unsigned i = 0, e = LSI.size(); i != e; ++i)
-          emitCode(MemRefs + "[" + utostr(i) + "] = "
-                   "cast<MemSDNode>(" + LSI[i] + ")->getMemOperand();");
-        After.push_back("cast<MachineSDNode>(ResNode)->setMemRefs(" +
-                        MemRefs + ", " + MemRefs + " + " + utostr(LSI.size()) +
-                        ");");
+    }
+    
+    unsigned ResNo = TmpNo++;
+    
+    unsigned OpsNo = OpcNo;
+    std::string CodePrefix;
+    bool ChainAssignmentNeeded = NodeHasChain && !isRoot;
+    std::deque<std::string> After;
+    std::string NodeName;
+    if (!isRoot) {
+      NodeName = "Tmp" + utostr(ResNo);
+      CodePrefix = "SDValue " + NodeName + "(";
+    } else {
+      NodeName = "ResNode";
+      if (!ResNodeDecled) {
+        CodePrefix = "SDNode *" + NodeName + " = ";
+        ResNodeDecled = true;
+      } else
+        CodePrefix = NodeName + " = ";
+    }
+    
+    std::string Code = "Opc" + utostr(OpcNo);
+    
+    if (!isRoot || (InputHasChain && !NodeHasChain))
+      // For call to "getMachineNode()".
+      Code += ", N->getDebugLoc()";
+    
+    emitOpcode(II.Namespace + "::" + II.TheDef->getName());
+    
+    // Output order: results, chain, flags
+    // Result types.
+    if (NumResults > 0 && N->getTypeNum(0) != MVT::isVoid) {
+      Code += ", VT" + utostr(VTNo);
+      emitVT(getEnumName(N->getTypeNum(0)));
+    }
+    // Add types for implicit results in physical registers; the scheduler
+    // will take care of adding copyfromreg nodes.
+    for (unsigned i = 0; i < NumDstRegs; i++) {
+      Record *RR = DstRegs[i];
+      if (RR->isSubClassOf("Register")) {
+        MVT::SimpleValueType RVT = getRegisterValueType(RR, CGT);
+        Code += ", " + getEnumName(RVT);
       }
-
-      if (NodeHasChain) {
-        if (IsVariadic)
-          emitCode("Ops" + utostr(OpsNo) + ".push_back(" + ChainName + ");");
-        else
-          AllOps.push_back(ChainName);
+    }
+    if (NodeHasChain)
+      Code += ", MVT::Other";
+    if (NodeHasOutFlag)
+      Code += ", MVT::Flag";
+    
+    // Inputs.
+    if (IsVariadic) {
+      for (unsigned i = 0, e = AllOps.size(); i != e; ++i)
+        emitCode("Ops" + utostr(OpsNo) + ".push_back(" + AllOps[i] + ");");
+      AllOps.clear();
+      
+      // Figure out whether any operands at the end of the op list are not
+      // part of the variable section.
+      std::string EndAdjust;
+      if (NodeHasInFlag || HasImpInputs)
+        EndAdjust = "-1";  // Always has one flag.
+      else if (NodeHasOptInFlag)
+        EndAdjust = "-(HasInFlag?1:0)"; // May have a flag.
+      
+      emitCode("for (unsigned i = NumInputRootOps + " + utostr(NodeHasChain) +
+               ", e = N->getNumOperands()" + EndAdjust + "; i != e; ++i) {");
+      
+      emitCode("  Ops" + utostr(OpsNo) + ".push_back(N->getOperand(i));");
+      emitCode("}");
+    }
+    
+    // Populate MemRefs with an entry for each memory access covered by
+    // this pattern.
+    if (isRoot && !LSI.empty()) {
+      std::string MemRefs = "MemRefs" + utostr(OpsNo);
+      emitCode("MachineSDNode::mmo_iterator " + MemRefs + " = "
+               "MF->allocateMemRefsArray(" + utostr(LSI.size()) + ");");
+      for (unsigned i = 0, e = LSI.size(); i != e; ++i)
+        emitCode(MemRefs + "[" + utostr(i) + "] = "
+                 "cast<MemSDNode>(" + LSI[i] + ")->getMemOperand();");
+      After.push_back("cast<MachineSDNode>(ResNode)->setMemRefs(" +
+                      MemRefs + ", " + MemRefs + " + " + utostr(LSI.size()) +
+                      ");");
+    }
+    
+    if (NodeHasChain) {
+      if (IsVariadic)
+        emitCode("Ops" + utostr(OpsNo) + ".push_back(" + ChainName + ");");
+      else
+        AllOps.push_back(ChainName);
+    }
+    
+    if (IsVariadic) {
+      if (NodeHasInFlag || HasImpInputs)
+        emitCode("Ops" + utostr(OpsNo) + ".push_back(InFlag);");
+      else if (NodeHasOptInFlag) {
+        emitCode("if (HasInFlag)");
+        emitCode("  Ops" + utostr(OpsNo) + ".push_back(InFlag);");
       }
-
-      if (IsVariadic) {
-        if (NodeHasInFlag || HasImpInputs)
-          emitCode("Ops" + utostr(OpsNo) + ".push_back(InFlag);");
-        else if (NodeHasOptInFlag) {
-          emitCode("if (HasInFlag)");
-          emitCode("  Ops" + utostr(OpsNo) + ".push_back(InFlag);");
-        }
-        Code += ", &Ops" + utostr(OpsNo) + "[0], Ops" + utostr(OpsNo) +
-          ".size()";
-      } else if (NodeHasInFlag || NodeHasOptInFlag || HasImpInputs)
-        AllOps.push_back("InFlag");
-
-      unsigned NumOps = AllOps.size();
-      if (NumOps) {
-        if (!NodeHasOptInFlag && NumOps < 4) {
-          for (unsigned i = 0; i != NumOps; ++i)
-            Code += ", " + AllOps[i];
-        } else {
-          std::string OpsCode = "SDValue Ops" + utostr(OpsNo) + "[] = { ";
-          for (unsigned i = 0; i != NumOps; ++i) {
-            OpsCode += AllOps[i];
-            if (i != NumOps-1)
-              OpsCode += ", ";
-          }
-          emitCode(OpsCode + " };");
-          Code += ", Ops" + utostr(OpsNo) + ", ";
-          if (NodeHasOptInFlag) {
-            Code += "HasInFlag ? ";
-            Code += utostr(NumOps) + " : " + utostr(NumOps-1);
-          } else
-            Code += utostr(NumOps);
+      Code += ", &Ops" + utostr(OpsNo) + "[0], Ops" + utostr(OpsNo) +
+      ".size()";
+    } else if (NodeHasInFlag || NodeHasOptInFlag || HasImpInputs)
+      AllOps.push_back("InFlag");
+    
+    unsigned NumOps = AllOps.size();
+    if (NumOps) {
+      if (!NodeHasOptInFlag && NumOps < 4) {
+        for (unsigned i = 0; i != NumOps; ++i)
+          Code += ", " + AllOps[i];
+      } else {
+        std::string OpsCode = "SDValue Ops" + utostr(OpsNo) + "[] = { ";
+        for (unsigned i = 0; i != NumOps; ++i) {
+          OpsCode += AllOps[i];
+          if (i != NumOps-1)
+            OpsCode += ", ";
         }
+        emitCode(OpsCode + " };");
+        Code += ", Ops" + utostr(OpsNo) + ", ";
+        if (NodeHasOptInFlag) {
+          Code += "HasInFlag ? ";
+          Code += utostr(NumOps) + " : " + utostr(NumOps-1);
+        } else
+          Code += utostr(NumOps);
       }
-          
-      if (!isRoot)
-        Code += "), 0";
-
-      std::vector<std::string> ReplaceFroms;
-      std::vector<std::string> ReplaceTos;
-      if (!isRoot) {
-        NodeOps.push_back("Tmp" + utostr(ResNo));
-      } else {
-
+    }
+    
+    if (!isRoot)
+      Code += "), 0";
+    
+    std::vector<std::string> ReplaceFroms;
+    std::vector<std::string> ReplaceTos;
+    if (!isRoot) {
+      NodeOps.push_back("Tmp" + utostr(ResNo));
+    } else {
+      
       if (NodeHasOutFlag) {
         if (!InFlagDecled) {
           After.push_back("SDValue InFlag(ResNode, " + 
@@ -1228,7 +1271,7 @@ public:
                           utostr(NumResults+NumDstRegs+(unsigned)NodeHasChain) +
                           ");");
       }
-
+      
       for (unsigned j = 0, e = FoldedChains.size(); j < e; j++) {
         ReplaceFroms.push_back("SDValue(" +
                                FoldedChains[j].first + ".getNode(), " +
@@ -1237,21 +1280,21 @@ public:
         ReplaceTos.push_back("SDValue(ResNode, " +
                              utostr(NumResults+NumDstRegs) + ")");
       }
-
+      
       if (NodeHasOutFlag) {
         if (FoldedFlag.first != "") {
           ReplaceFroms.push_back("SDValue(" + FoldedFlag.first + ".getNode(), " +
                                  utostr(FoldedFlag.second) + ")");
           ReplaceTos.push_back("InFlag");
         } else {
-          assert(NodeHasProperty(Pattern, SDNPOutFlag, CGP));
+          assert(Pattern->NodeHasProperty(SDNPOutFlag, CGP));
           ReplaceFroms.push_back("SDValue(N, " +
                                  utostr(NumPatResults + (unsigned)InputHasChain)
                                  + ")");
           ReplaceTos.push_back("InFlag");
         }
       }
-
+      
       if (!ReplaceFroms.empty() && InputHasChain) {
         ReplaceFroms.push_back("SDValue(N, " +
                                utostr(NumPatResults) + ")");
@@ -1259,7 +1302,7 @@ public:
                              ChainName + ".getResNo()" + ")");
         ChainAssignmentNeeded |= NodeHasChain;
       }
-
+      
       // User does not expect the instruction would produce a chain!
       if ((!InputHasChain && NodeHasChain) && NodeHasOutFlag) {
         ;
@@ -1270,193 +1313,97 @@ public:
                                utostr(NumPatResults) + ")");
         ReplaceTos.push_back(ChainName);
       }
-      }
-
-      if (ChainAssignmentNeeded) {
-        // Remember which op produces the chain.
-        std::string ChainAssign;
-        if (!isRoot)
-          ChainAssign = ChainName + " = SDValue(" + NodeName +
-                        ".getNode(), " + utostr(NumResults+NumDstRegs) + ");";
-        else
-          ChainAssign = ChainName + " = SDValue(" + NodeName +
-                        ", " + utostr(NumResults+NumDstRegs) + ");";
-
-        After.push_front(ChainAssign);
-      }
-
-      if (ReplaceFroms.size() == 1) {
-        After.push_back("ReplaceUses(" + ReplaceFroms[0] + ", " +
-                        ReplaceTos[0] + ");");
-      } else if (!ReplaceFroms.empty()) {
-        After.push_back("const SDValue Froms[] = {");
-        for (unsigned i = 0, e = ReplaceFroms.size(); i != e; ++i)
-          After.push_back("  " + ReplaceFroms[i] + (i + 1 != e ? "," : ""));
-        After.push_back("};");
-        After.push_back("const SDValue Tos[] = {");
-        for (unsigned i = 0, e = ReplaceFroms.size(); i != e; ++i)
-          After.push_back("  " + ReplaceTos[i] + (i + 1 != e ? "," : ""));
-        After.push_back("};");
-        After.push_back("ReplaceUses(Froms, Tos, " +
-                        itostr(ReplaceFroms.size()) + ");");
-      }
-
-      // We prefer to use SelectNodeTo since it avoids allocation when
-      // possible and it avoids CSE map recalculation for the node's
-      // users, however it's tricky to use in a non-root context.
-      //
-      // We also don't use SelectNodeTo if the pattern replacement is being
-      // used to jettison a chain result, since morphing the node in place
-      // would leave users of the chain dangling.
-      //
-      if (!isRoot || (InputHasChain && !NodeHasChain)) {
-        Code = "CurDAG->getMachineNode(" + Code;
-      } else {
-        Code = "CurDAG->SelectNodeTo(N, " + Code;
-      }
-      if (isRoot) {
-        if (After.empty())
-          CodePrefix = "return ";
-        else
-          After.push_back("return ResNode;");
-      }
-
-      emitCode(CodePrefix + Code + ");");
-
-      if (GenDebug) {
-        if (!isRoot) {
-          emitCode("CurDAG->setSubgraphColor(" + NodeName +".getNode(), \"yellow\");");
-          emitCode("CurDAG->setSubgraphColor(" + NodeName +".getNode(), \"black\");");
-        }
-        else {
-          emitCode("CurDAG->setSubgraphColor(" + NodeName +", \"yellow\");");
-          emitCode("CurDAG->setSubgraphColor(" + NodeName +", \"black\");");
-        }
-      }
-
-      for (unsigned i = 0, e = After.size(); i != e; ++i)
-        emitCode(After[i]);
-
-      return NodeOps;
     }
-    if (Op->isSubClassOf("SDNodeXForm")) {
-      assert(N->getNumChildren() == 1 && "node xform should have one child!");
-      // PatLeaf node - the operand may or may not be a leaf node. But it should
-      // behave like one.
-      std::vector<std::string> Ops =
-        EmitResultCode(N->getChild(0), DstRegs, InFlagDecled,
-                       ResNodeDecled, true);
-      unsigned ResNo = TmpNo++;
-      emitCode("SDValue Tmp" + utostr(ResNo) + " = Transform_" + Op->getName()
-               + "(" + Ops.back() + ".getNode());");
-      NodeOps.push_back("Tmp" + utostr(ResNo));
-      if (isRoot)
-        emitCode("return Tmp" + utostr(ResNo) + ".getNode();");
-      return NodeOps;
-    }
-
-    N->dump();
-    errs() << "\n";
-    throw std::string("Unknown node in result pattern!");
-  }
-
-  /// InsertOneTypeCheck - Insert a type-check for an unresolved type in 'Pat'
-  /// and add it to the tree. 'Pat' and 'Other' are isomorphic trees except that 
-  /// 'Pat' may be missing types.  If we find an unresolved type to add a check
-  /// for, this returns true otherwise false if Pat has all types.
-  bool InsertOneTypeCheck(TreePatternNode *Pat, TreePatternNode *Other,
-                          const std::string &Prefix, bool isRoot = false) {
-    // Did we find one?
-    if (Pat->getExtTypes() != Other->getExtTypes()) {
-      // Move a type over from 'other' to 'pat'.
-      Pat->setTypes(Other->getExtTypes());
-      // The top level node type is checked outside of the select function.
+    
+    if (ChainAssignmentNeeded) {
+      // Remember which op produces the chain.
+      std::string ChainAssign;
       if (!isRoot)
-        emitCheck(Prefix + ".getValueType() == " +
-                  getName(Pat->getTypeNum(0)));
-      return true;
+        ChainAssign = ChainName + " = SDValue(" + NodeName +
+        ".getNode(), " + utostr(NumResults+NumDstRegs) + ");";
+      else
+        ChainAssign = ChainName + " = SDValue(" + NodeName +
+        ", " + utostr(NumResults+NumDstRegs) + ");";
+      
+      After.push_front(ChainAssign);
     }
-  
-    unsigned OpNo =
-      (unsigned) NodeHasProperty(Pat, SDNPHasChain, CGP);
-    for (unsigned i = 0, e = Pat->getNumChildren(); i != e; ++i, ++OpNo)
-      if (InsertOneTypeCheck(Pat->getChild(i), Other->getChild(i),
-                             Prefix + utostr(OpNo)))
-        return true;
-    return false;
-  }
-
-private:
-  /// EmitInFlagSelectCode - Emit the flag operands for the DAG that is
-  /// being built.
-  void EmitInFlagSelectCode(TreePatternNode *N, const std::string &RootName,
-                            bool &ChainEmitted, bool &InFlagDecled,
-                            bool &ResNodeDecled, bool isRoot = false) {
-    const CodeGenTarget &T = CGP.getTargetInfo();
-    unsigned OpNo =
-      (unsigned) NodeHasProperty(N, SDNPHasChain, CGP);
-    bool HasInFlag = NodeHasProperty(N, SDNPInFlag, CGP);
-    for (unsigned i = 0, e = N->getNumChildren(); i != e; ++i, ++OpNo) {
-      TreePatternNode *Child = N->getChild(i);
-      if (!Child->isLeaf()) {
-        EmitInFlagSelectCode(Child, RootName + utostr(OpNo), ChainEmitted,
-                             InFlagDecled, ResNodeDecled);
+    
+    if (ReplaceFroms.size() == 1) {
+      After.push_back("ReplaceUses(" + ReplaceFroms[0] + ", " +
+                      ReplaceTos[0] + ");");
+    } else if (!ReplaceFroms.empty()) {
+      After.push_back("const SDValue Froms[] = {");
+      for (unsigned i = 0, e = ReplaceFroms.size(); i != e; ++i)
+        After.push_back("  " + ReplaceFroms[i] + (i + 1 != e ? "," : ""));
+      After.push_back("};");
+      After.push_back("const SDValue Tos[] = {");
+      for (unsigned i = 0, e = ReplaceFroms.size(); i != e; ++i)
+        After.push_back("  " + ReplaceTos[i] + (i + 1 != e ? "," : ""));
+      After.push_back("};");
+      After.push_back("ReplaceUses(Froms, Tos, " +
+                      itostr(ReplaceFroms.size()) + ");");
+    }
+    
+    // We prefer to use SelectNodeTo since it avoids allocation when
+    // possible and it avoids CSE map recalculation for the node's
+    // users, however it's tricky to use in a non-root context.
+    //
+    // We also don't use SelectNodeTo if the pattern replacement is being
+    // used to jettison a chain result, since morphing the node in place
+    // would leave users of the chain dangling.
+    //
+    if (!isRoot || (InputHasChain && !NodeHasChain)) {
+      Code = "CurDAG->getMachineNode(" + Code;
+    } else {
+      Code = "CurDAG->SelectNodeTo(N, " + Code;
+    }
+    if (isRoot) {
+      if (After.empty())
+        CodePrefix = "return ";
+      else
+        After.push_back("return ResNode;");
+    }
+    
+    emitCode(CodePrefix + Code + ");");
+    
+    if (GenDebug) {
+      if (!isRoot) {
+        emitCode("CurDAG->setSubgraphColor(" +
+                 NodeName +".getNode(), \"yellow\");");
+        emitCode("CurDAG->setSubgraphColor(" +
+                 NodeName +".getNode(), \"black\");");
       } else {
-        if (DefInit *DI = dynamic_cast<DefInit*>(Child->getLeafValue())) {
-          if (!Child->getName().empty()) {
-            std::string Name = RootName + utostr(OpNo);
-            if (Duplicates.find(Name) != Duplicates.end())
-              // A duplicate! Do not emit a copy for this node.
-              continue;
-          }
-
-          Record *RR = DI->getDef();
-          if (RR->isSubClassOf("Register")) {
-            MVT::SimpleValueType RVT = getRegisterValueType(RR, T);
-            if (RVT == MVT::Flag) {
-              if (!InFlagDecled) {
-                emitCode("SDValue InFlag = " +
-                         getValueName(RootName + utostr(OpNo)) + ";");
-                InFlagDecled = true;
-              } else
-                emitCode("InFlag = " +
-                         getValueName(RootName + utostr(OpNo)) + ";");
-            } else {
-              if (!ChainEmitted) {
-                emitCode("SDValue Chain = CurDAG->getEntryNode();");
-                ChainName = "Chain";
-                ChainEmitted = true;
-              }
-              if (!InFlagDecled) {
-                emitCode("SDValue InFlag(0, 0);");
-                InFlagDecled = true;
-              }
-              std::string Decl = (!ResNodeDecled) ? "SDNode *" : "";
-              emitCode(Decl + "ResNode = CurDAG->getCopyToReg(" + ChainName +
-                       ", " + getNodeName(RootName) + "->getDebugLoc()" +
-                       ", " + getQualifiedName(RR) +
-                       ", " +  getValueName(RootName + utostr(OpNo)) +
-                       ", InFlag).getNode();");
-              ResNodeDecled = true;
-              emitCode(ChainName + " = SDValue(ResNode, 0);");
-              emitCode("InFlag = SDValue(ResNode, 1);");
-            }
-          }
-        }
+        emitCode("CurDAG->setSubgraphColor(" + NodeName +", \"yellow\");");
+        emitCode("CurDAG->setSubgraphColor(" + NodeName +", \"black\");");
       }
     }
-
-    if (HasInFlag) {
-      if (!InFlagDecled) {
-        emitCode("SDValue InFlag = " + getNodeName(RootName) +
-               "->getOperand(" + utostr(OpNo) + ");");
-        InFlagDecled = true;
-      } else
-        emitCode("InFlag = " + getNodeName(RootName) +
-               "->getOperand(" + utostr(OpNo) + ");");
-    }
+    
+    for (unsigned i = 0, e = After.size(); i != e; ++i)
+      emitCode(After[i]);
+    
+    return NodeOps;
   }
-};
+  if (Op->isSubClassOf("SDNodeXForm")) {
+    assert(N->getNumChildren() == 1 && "node xform should have one child!");
+    // PatLeaf node - the operand may or may not be a leaf node. But it should
+    // behave like one.
+    std::vector<std::string> Ops =
+    EmitResultCode(N->getChild(0), DstRegs, InFlagDecled,
+                   ResNodeDecled, true);
+    unsigned ResNo = TmpNo++;
+    emitCode("SDValue Tmp" + utostr(ResNo) + " = Transform_" + Op->getName()
+             + "(" + Ops.back() + ".getNode());");
+    NodeOps.push_back("Tmp" + utostr(ResNo));
+    if (isRoot)
+      emitCode("return Tmp" + utostr(ResNo) + ".getNode();");
+    return NodeOps;
+  }
+  
+  N->dump();
+  errs() << "\n";
+  throw std::string("Unknown node in result pattern!");
+}
+
 
 /// EmitCodeForPattern - Given a pattern to match, emit code to the specified
 /// stream to match the pattern, and generate the code for the match if it
@@ -1481,7 +1428,8 @@ void DAGISelEmitter::GenerateCodeForPattern(const PatternToMatch &Pattern,
   bool FoundChain = false;
   Emitter.EmitMatchCode(Pattern.getSrcPattern(), NULL, "N", "", FoundChain);
 
-  // TP - Get *SOME* tree pattern, we don't care which.
+  // TP - Get *SOME* tree pattern, we don't care which.  It is only used for
+  // diagnostics, which we know are impossible at this point.
   TreePattern &TP = *CGP.pf_begin()->second;
   
   // At this point, we know that we structurally match the pattern, but the
@@ -1497,7 +1445,7 @@ void DAGISelEmitter::GenerateCodeForPattern(const PatternToMatch &Pattern,
   // types are resolved.
   //
   TreePatternNode *Pat = Pattern.getSrcPattern()->clone();
-  RemoveAllTypes(Pat);
+  Pat->RemoveAllTypes();
   
   do {
     // Resolve/propagate as many types as possible.
@@ -1662,7 +1610,7 @@ static std::string getLegalCName(std::string OpName) {
 
 void DAGISelEmitter::EmitInstructionSelector(raw_ostream &OS) {
   const CodeGenTarget &Target = CGP.getTargetInfo();
-  
+
   // Get the namespace to insert instructions into.
   std::string InstNS = Target.getInstNamespace();
   if (!InstNS.empty()) InstNS += "::";
@@ -1674,7 +1622,6 @@ void DAGISelEmitter::EmitInstructionSelector(raw_ostream &OS) {
   for (CodeGenDAGPatterns::ptm_iterator I = CGP.ptm_begin(),
        E = CGP.ptm_end(); I != E; ++I) {
     const PatternToMatch &Pattern = *I;
-
     TreePatternNode *Node = Pattern.getSrcPattern();
     if (!Node->isLeaf()) {
       PatternsByOpcode[getOpcodeName(Node->getOperator(), CGP)].
@@ -1684,7 +1631,7 @@ void DAGISelEmitter::EmitInstructionSelector(raw_ostream &OS) {
       if (dynamic_cast<IntInit*>(Node->getLeafValue())) {
         PatternsByOpcode[getOpcodeName(CGP.getSDNodeNamed("imm"), CGP)].
           push_back(&Pattern);
-      } else if ((CP = NodeGetComplexPattern(Node, CGP))) {
+      } else if ((CP = Node->getComplexPatternInfo(CGP))) {
         std::vector<Record*> OpNodes = CP->getRootNodes();
         for (unsigned j = 0, e = OpNodes.size(); j != e; j++) {
           PatternsByOpcode[getOpcodeName(OpNodes[j], CGP)]
@@ -1831,9 +1778,8 @@ void DAGISelEmitter::EmitInstructionSelector(raw_ostream &OS) {
 
         // Replace the emission code within selection routines with calls to the
         // emission functions.
-        if (GenDebug) {
+        if (GenDebug)
           GeneratedCode.push_back(std::make_pair(0, "CurDAG->setSubgraphColor(N, \"red\");"));
-        }
         CallerCode = "SDNode *Result = Emit_" + utostr(EmitFuncNum) + CallerCode;
         GeneratedCode.push_back(std::make_pair(3, CallerCode));
         if (GenDebug) {
@@ -2065,4 +2011,26 @@ void DAGISelEmitter::run(raw_ostream &OS) {
   // definitions.  Emit the resultant instruction selector.
   EmitInstructionSelector(OS);  
   
+#if 0
+  MatcherNode *Matcher = 0;
+  // Walk the patterns backwards, building a matcher for each and adding it to
+  // the matcher for the whole target.
+  for (CodeGenDAGPatterns::ptm_iterator I = CGP.ptm_begin(),
+       E = CGP.ptm_end(); I != E;) {
+    const PatternToMatch &Pattern = *--E;
+    MatcherNode *N = ConvertPatternToMatcher(Pattern, CGP);
+    
+    if (Matcher == 0)
+      Matcher = N;
+    else
+      Matcher = new PushMatcherNode(N, Matcher);
+  }
+  
+  
+  EmitMatcherTable(Matcher, OS);
+  
+  
+  //Matcher->dump();
+  delete Matcher;
+#endif
 }
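The commit message describes replacing "gobs of C++ code" with a table-based matcher: each matcher node kind (CheckOpcode, CheckInteger, and so on) becomes a few bytes in a flat table that a small loop interprets against a DAG node. The following is a hypothetical, heavily simplified sketch of that idea; the opcode names, byte encoding, and `FakeNode` type are illustrative inventions, not the actual LLVM encoding.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative matcher-table opcodes (invented names, not LLVM's).
enum MatcherOp : uint8_t { OPC_CheckOpcode, OPC_CheckInteger, OPC_Accept, OPC_Fail };

// Stand-in for the SDNode being matched.
struct FakeNode { uint8_t Opcode; int64_t Value; };

// Interpret the byte table against a node; returns true if the pattern
// reaches OPC_Accept, false as soon as any check fails.
static bool InterpretMatcher(const std::vector<uint8_t> &Table, const FakeNode &N) {
  std::size_t i = 0;
  while (i < Table.size()) {
    switch (Table[i++]) {
    case OPC_CheckOpcode:
      if (N.Opcode != Table[i++]) return false;          // wrong opcode
      break;
    case OPC_CheckInteger:
      if (N.Value != (int64_t)(int8_t)Table[i++]) return false; // small imms only
      break;
    case OPC_Accept:
      return true;
    case OPC_Fail:
      return false;
    }
  }
  return false;
}
```

The win hinted at in the log ("the table for the X86 ISel is 75K") comes from this shape: the per-pattern logic shrinks from generated C++ functions to data interpreted by one shared loop.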
diff --git a/libclamav/c++/llvm/utils/TableGen/DAGISelMatcher.cpp b/libclamav/c++/llvm/utils/TableGen/DAGISelMatcher.cpp
new file mode 100644
index 0000000..1363aa3
--- /dev/null
+++ b/libclamav/c++/llvm/utils/TableGen/DAGISelMatcher.cpp
@@ -0,0 +1,108 @@
+//===- DAGISelMatcher.cpp - Representation of DAG pattern matcher ---------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#include "DAGISelMatcher.h"
+#include "CodeGenDAGPatterns.h"
+#include "CodeGenTarget.h"
+#include "llvm/Support/raw_ostream.h"
+using namespace llvm;
+
+void MatcherNode::dump() const {
+  print(errs());
+}
+
+void EmitNodeMatcherNode::print(raw_ostream &OS, unsigned indent) const {
+  OS.indent(indent) << "EmitNode: Src = " << *Pattern.getSrcPattern() << "\n";
+  OS.indent(indent) << "EmitNode: Dst = " << *Pattern.getDstPattern() << "\n";
+}
+
+void MatcherNodeWithChild::printChild(raw_ostream &OS, unsigned indent) const {
+  if (Child)
+    return Child->print(OS, indent);
+  OS.indent(indent) << "<null child>\n";
+}
+
+
+void PushMatcherNode::print(raw_ostream &OS, unsigned indent) const {
+  OS.indent(indent) << "Push\n";
+  printChild(OS, indent+2);
+  Failure->print(OS, indent);
+}
+
+void RecordMatcherNode::print(raw_ostream &OS, unsigned indent) const {
+  OS.indent(indent) << "Record\n";
+  printChild(OS, indent);
+}
+
+void MoveChildMatcherNode::print(raw_ostream &OS, unsigned indent) const {
+  OS.indent(indent) << "MoveChild " << ChildNo << '\n';
+  printChild(OS, indent);
+}
+
+void MoveParentMatcherNode::print(raw_ostream &OS, unsigned indent) const {
+  OS.indent(indent) << "MoveParent\n";
+  printChild(OS, indent);
+}
+
+void CheckSameMatcherNode::print(raw_ostream &OS, unsigned indent) const {
+  OS.indent(indent) << "CheckSame " << MatchNumber << '\n';
+  printChild(OS, indent);
+}
+
+void CheckPatternPredicateMatcherNode::
+print(raw_ostream &OS, unsigned indent) const {
+  OS.indent(indent) << "CheckPatternPredicate " << Predicate << '\n';
+  printChild(OS, indent);
+}
+
+void CheckPredicateMatcherNode::print(raw_ostream &OS, unsigned indent) const {
+  OS.indent(indent) << "CheckPredicate " << PredName << '\n';
+  printChild(OS, indent);
+}
+
+void CheckOpcodeMatcherNode::print(raw_ostream &OS, unsigned indent) const {
+  OS.indent(indent) << "CheckOpcode " << OpcodeName << '\n';
+  printChild(OS, indent);
+}
+
+void CheckTypeMatcherNode::print(raw_ostream &OS, unsigned indent) const {
+  OS.indent(indent) << "CheckType " << getEnumName(Type) << '\n';
+  printChild(OS, indent);
+}
+
+void CheckIntegerMatcherNode::print(raw_ostream &OS, unsigned indent) const {
+  OS.indent(indent) << "CheckInteger " << Value << '\n';
+  printChild(OS, indent);
+}
+
+void CheckCondCodeMatcherNode::print(raw_ostream &OS, unsigned indent) const {
+  OS.indent(indent) << "CheckCondCode ISD::" << CondCodeName << '\n';
+  printChild(OS, indent);
+}
+
+void CheckValueTypeMatcherNode::print(raw_ostream &OS, unsigned indent) const {
+  OS.indent(indent) << "CheckValueType MVT::" << TypeName << '\n';
+  printChild(OS, indent);
+}
+
+void CheckComplexPatMatcherNode::print(raw_ostream &OS, unsigned indent) const {
+  OS.indent(indent) << "CheckComplexPat " << Pattern.getSelectFunc() << '\n';
+  printChild(OS, indent);
+}
+
+void CheckAndImmMatcherNode::print(raw_ostream &OS, unsigned indent) const {
+  OS.indent(indent) << "CheckAndImm " << Value << '\n';
+  printChild(OS, indent);
+}
+
+void CheckOrImmMatcherNode::print(raw_ostream &OS, unsigned indent) const {
+  OS.indent(indent) << "CheckOrImm " << Value << '\n';
+  printChild(OS, indent);
+}
+
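The `#if 0` block in `run()` above builds the whole-target matcher by walking patterns backwards and wrapping each previous matcher in a `PushMatcherNode` failure continuation. A minimal sketch of that chaining idiom, using a hypothetical simplified node type rather than the real `MatcherNode` classes:

```cpp
#include <memory>
#include <string>

// Simplified stand-in for MatcherNodeWithChild/PushMatcherNode: each node
// owns the matcher to try if it fails, forming a backtracking chain.
struct Node {
  std::string Label;              // e.g. which pattern this matcher accepts
  std::unique_ptr<Node> Failure;  // tried when this matcher fails to match
};

// Prepend a matcher, keeping the old chain as its failure path --
// the analogue of: Matcher = new PushMatcherNode(N, Matcher);
static std::unique_ptr<Node> Push(std::string Label, std::unique_ptr<Node> Rest) {
  auto N = std::make_unique<Node>();
  N->Label = std::move(Label);
  N->Failure = std::move(Rest);
  return N;
}

// Walk the failure chain and count how many patterns would be attempted.
static int ChainLength(const Node *N) {
  int Len = 0;
  for (; N; N = N->Failure.get()) ++Len;
  return Len;
}
```

Because `run()` iterates the patterns backwards while prepending, earlier patterns end up at the head of the chain and are tried first, with later ones reachable only through successive failures.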
diff --git a/libclamav/c++/llvm/utils/TableGen/DAGISelMatcher.h b/libclamav/c++/llvm/utils/TableGen/DAGISelMatcher.h
new file mode 100644
index 0000000..72bdb7b
--- /dev/null
+++ b/libclamav/c++/llvm/utils/TableGen/DAGISelMatcher.h
@@ -0,0 +1,362 @@
+//===- DAGISelMatcher.h - Representation of DAG pattern matcher -----------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef TBLGEN_DAGISELMATCHER_H
+#define TBLGEN_DAGISELMATCHER_H
+
+#include "llvm/ADT/OwningPtr.h"
+#include "llvm/ADT/StringRef.h"
+#include "llvm/CodeGen/ValueTypes.h"
+
+namespace llvm {
+  class CodeGenDAGPatterns;
+  class MatcherNode;
+  class PatternToMatch;
+  class raw_ostream;
+  class ComplexPattern;
+
+MatcherNode *ConvertPatternToMatcher(const PatternToMatch &Pattern,
+                                     const CodeGenDAGPatterns &CGP);
+
+void EmitMatcherTable(const MatcherNode *Matcher, raw_ostream &OS);
+
+  
+/// MatcherNode - Base class for all the DAG ISel Matcher representation
+/// nodes.
+class MatcherNode {
+public:
+  enum KindTy {
+    EmitNode,
+    Push,           // [Push, Dest0, Dest1, Dest2, Dest3]
+    Record,         // [Record]
+    MoveChild,      // [MoveChild, Child#]
+    MoveParent,     // [MoveParent]
+    
+    CheckSame,      // [CheckSame, N]         Fail if not same as prev match.
+    CheckPatternPredicate,
+    CheckPredicate, // [CheckPredicate, P]    Fail if predicate fails.
+    CheckOpcode,    // [CheckOpcode, Opcode]  Fail if not opcode.
+    CheckType,      // [CheckType, MVT]       Fail if not correct type.
+    CheckInteger,   // [CheckInteger, int0,int1,int2,...int7] Fail if wrong val.
+    CheckCondCode,  // [CheckCondCode, CondCode] Fail if not condcode.
+    CheckValueType,
+    CheckComplexPat,
+    CheckAndImm,
+    CheckOrImm
+  };
+  const KindTy Kind;
+  
+protected:
+  MatcherNode(KindTy K) : Kind(K) {}
+public:
+  virtual ~MatcherNode() {}
+  
+  KindTy getKind() const { return Kind; }
+  
+  
+  static inline bool classof(const MatcherNode *) { return true; }
+  
+  virtual void print(raw_ostream &OS, unsigned indent = 0) const = 0;
+  void dump() const;
+};
+  
+/// EmitNodeMatcherNode - This signals a successful match and generates a node.
+class EmitNodeMatcherNode : public MatcherNode {
+  const PatternToMatch &Pattern;
+public:
+  EmitNodeMatcherNode(const PatternToMatch &pattern)
+    : MatcherNode(EmitNode), Pattern(pattern) {}
+
+  const PatternToMatch &getPattern() const { return Pattern; }
+
+  static inline bool classof(const MatcherNode *N) {
+    return N->getKind() == EmitNode;
+  }
+
+  virtual void print(raw_ostream &OS, unsigned indent = 0) const;
+};
+
+/// MatcherNodeWithChild - Every node except the final accept state has a child
+/// that is executed after the node runs.  This class captures this commonality.
+class MatcherNodeWithChild : public MatcherNode {
+  OwningPtr<MatcherNode> Child;
+public:
+  MatcherNodeWithChild(KindTy K) : MatcherNode(K) {}
+  
+  MatcherNode *getChild() { return Child.get(); }
+  const MatcherNode *getChild() const { return Child.get(); }
+  void setChild(MatcherNode *C) { Child.reset(C); }
+  
+  static inline bool classof(const MatcherNode *N) {
+    return N->getKind() != EmitNode;
+  }
+  
+protected:
+  void printChild(raw_ostream &OS, unsigned indent) const;
+};
+
+/// PushMatcherNode - This pushes a failure scope on the stack and evaluates
+/// 'child'.  If 'child' fails to match, it pops its scope and attempts to
+/// match 'Failure'.
+class PushMatcherNode : public MatcherNodeWithChild {
+  OwningPtr<MatcherNode> Failure;
+public:
+  PushMatcherNode(MatcherNode *child = 0, MatcherNode *failure = 0)
+    : MatcherNodeWithChild(Push), Failure(failure) {
+    setChild(child);
+  }
+  
+  MatcherNode *getFailure() { return Failure.get(); }
+  const MatcherNode *getFailure() const { return Failure.get(); }
+  void setFailure(MatcherNode *N) { Failure.reset(N); }
+
+  static inline bool classof(const MatcherNode *N) {
+    return N->getKind() == Push;
+  }
+  
+  virtual void print(raw_ostream &OS, unsigned indent = 0) const;
+};
+
+/// RecordMatcherNode - Save the current node in the operand list.
+class RecordMatcherNode : public MatcherNodeWithChild {
+public:
+  RecordMatcherNode() : MatcherNodeWithChild(Record) {}
+  
+  static inline bool classof(const MatcherNode *N) {
+    return N->getKind() == Record;
+  }
+  
+  virtual void print(raw_ostream &OS, unsigned indent = 0) const;
+};
+  
+/// MoveChildMatcherNode - This tells the interpreter to move into the
+/// specified child node.
+class MoveChildMatcherNode : public MatcherNodeWithChild {
+  unsigned ChildNo;
+public:
+  MoveChildMatcherNode(unsigned childNo)
+  : MatcherNodeWithChild(MoveChild), ChildNo(childNo) {}
+  
+  unsigned getChildNo() const { return ChildNo; }
+  
+  static inline bool classof(const MatcherNode *N) {
+    return N->getKind() == MoveChild;
+  }
+  
+  virtual void print(raw_ostream &OS, unsigned indent = 0) const;
+};
+  
+/// MoveParentMatcherNode - This tells the interpreter to move to the parent
+/// of the current node.
+class MoveParentMatcherNode : public MatcherNodeWithChild {
+public:
+  MoveParentMatcherNode()
+  : MatcherNodeWithChild(MoveParent) {}
+  
+  static inline bool classof(const MatcherNode *N) {
+    return N->getKind() == MoveParent;
+  }
+  
+  virtual void print(raw_ostream &OS, unsigned indent = 0) const;
+};
+
+/// CheckSameMatcherNode - This checks to see if this node is exactly the same
+/// node as the specified match that was recorded with 'Record'.  This is used
+/// when patterns have the same name in them, like '(mul GPR:$in, GPR:$in)'.
+class CheckSameMatcherNode : public MatcherNodeWithChild {
+  unsigned MatchNumber;
+public:
+  CheckSameMatcherNode(unsigned matchnumber)
+  : MatcherNodeWithChild(CheckSame), MatchNumber(matchnumber) {}
+  
+  unsigned getMatchNumber() const { return MatchNumber; }
+  
+  static inline bool classof(const MatcherNode *N) {
+    return N->getKind() == CheckSame;
+  }
+  
+  virtual void print(raw_ostream &OS, unsigned indent = 0) const;
+};
+  
+/// CheckPatternPredicateMatcherNode - This checks the target-specific predicate
+/// to see if the entire pattern is capable of matching.  This predicate does
+/// not take a node as input.  This is used for subtarget feature checks etc.
+class CheckPatternPredicateMatcherNode : public MatcherNodeWithChild {
+  std::string Predicate;
+public:
+  CheckPatternPredicateMatcherNode(StringRef predicate)
+  : MatcherNodeWithChild(CheckPatternPredicate), Predicate(predicate) {}
+  
+  StringRef getPredicate() const { return Predicate; }
+  
+  static inline bool classof(const MatcherNode *N) {
+    return N->getKind() == CheckPatternPredicate;
+  }
+  
+  virtual void print(raw_ostream &OS, unsigned indent = 0) const;
+};
+  
+/// CheckPredicateMatcherNode - This checks the target-specific predicate to
+/// see if the node is acceptable.
+class CheckPredicateMatcherNode : public MatcherNodeWithChild {
+  StringRef PredName;
+public:
+  CheckPredicateMatcherNode(StringRef predname)
+    : MatcherNodeWithChild(CheckPredicate), PredName(predname) {}
+  
+  StringRef getPredicateName() const { return PredName; }
+
+  static inline bool classof(const MatcherNode *N) {
+    return N->getKind() == CheckPredicate;
+  }
+  
+  virtual void print(raw_ostream &OS, unsigned indent = 0) const;
+};
+  
+  
+/// CheckOpcodeMatcherNode - This checks to see if the current node has the
+/// specified opcode; if not, it fails to match.
+class CheckOpcodeMatcherNode : public MatcherNodeWithChild {
+  StringRef OpcodeName;
+public:
+  CheckOpcodeMatcherNode(StringRef opcodename)
+    : MatcherNodeWithChild(CheckOpcode), OpcodeName(opcodename) {}
+  
+  StringRef getOpcodeName() const { return OpcodeName; }
+  
+  static inline bool classof(const MatcherNode *N) {
+    return N->getKind() == CheckOpcode;
+  }
+  
+  virtual void print(raw_ostream &OS, unsigned indent = 0) const;
+};
+  
+/// CheckTypeMatcherNode - This checks to see if the current node has the
+/// specified type; if not, it fails to match.
+class CheckTypeMatcherNode : public MatcherNodeWithChild {
+  MVT::SimpleValueType Type;
+public:
+  CheckTypeMatcherNode(MVT::SimpleValueType type)
+    : MatcherNodeWithChild(CheckType), Type(type) {}
+  
+  MVT::SimpleValueType getType() const { return Type; }
+  
+  static inline bool classof(const MatcherNode *N) {
+    return N->getKind() == CheckType;
+  }
+  
+  virtual void print(raw_ostream &OS, unsigned indent = 0) const;
+};
+
+/// CheckIntegerMatcherNode - This checks to see if the current node is a
+/// ConstantSDNode with the specified integer value; if not, it fails to match.
+class CheckIntegerMatcherNode : public MatcherNodeWithChild {
+  int64_t Value;
+public:
+  CheckIntegerMatcherNode(int64_t value)
+    : MatcherNodeWithChild(CheckInteger), Value(value) {}
+  
+  int64_t getValue() const { return Value; }
+  
+  static inline bool classof(const MatcherNode *N) {
+    return N->getKind() == CheckInteger;
+  }
+  
+  virtual void print(raw_ostream &OS, unsigned indent = 0) const;
+};
+  
+/// CheckCondCodeMatcherNode - This checks to see if the current node is a
+/// CondCodeSDNode with the specified condition; if not, it fails to match.
+class CheckCondCodeMatcherNode : public MatcherNodeWithChild {
+  StringRef CondCodeName;
+public:
+  CheckCondCodeMatcherNode(StringRef condcodename)
+  : MatcherNodeWithChild(CheckCondCode), CondCodeName(condcodename) {}
+  
+  StringRef getCondCodeName() const { return CondCodeName; }
+  
+  static inline bool classof(const MatcherNode *N) {
+    return N->getKind() == CheckCondCode;
+  }
+  
+  virtual void print(raw_ostream &OS, unsigned indent = 0) const;
+};
+  
+/// CheckValueTypeMatcherNode - This checks to see if the current node is a
+/// VTSDNode with the specified type; if not, it fails to match.
+class CheckValueTypeMatcherNode : public MatcherNodeWithChild {
+  StringRef TypeName;
+public:
+  CheckValueTypeMatcherNode(StringRef type_name)
+  : MatcherNodeWithChild(CheckValueType), TypeName(type_name) {}
+  
+  StringRef getTypeName() const { return TypeName; }
+
+  static inline bool classof(const MatcherNode *N) {
+    return N->getKind() == CheckValueType;
+  }
+  
+  virtual void print(raw_ostream &OS, unsigned indent = 0) const;
+};
+  
+  
+  
+/// CheckComplexPatMatcherNode - This node runs the specified ComplexPattern on
+/// the current node.
+class CheckComplexPatMatcherNode : public MatcherNodeWithChild {
+  const ComplexPattern &Pattern;
+public:
+  CheckComplexPatMatcherNode(const ComplexPattern &pattern)
+  : MatcherNodeWithChild(CheckComplexPat), Pattern(pattern) {}
+  
+  static inline bool classof(const MatcherNode *N) {
+    return N->getKind() == CheckComplexPat;
+  }
+  
+  virtual void print(raw_ostream &OS, unsigned indent = 0) const;
+};
+  
+/// CheckAndImmMatcherNode - This checks to see if the current node is an 'and'
+/// with something equivalent to the specified immediate.
+class CheckAndImmMatcherNode : public MatcherNodeWithChild {
+  int64_t Value;
+public:
+  CheckAndImmMatcherNode(int64_t value)
+  : MatcherNodeWithChild(CheckAndImm), Value(value) {}
+  
+  int64_t getValue() const { return Value; }
+  
+  static inline bool classof(const MatcherNode *N) {
+    return N->getKind() == CheckAndImm;
+  }
+  
+  virtual void print(raw_ostream &OS, unsigned indent = 0) const;
+};
+
+/// CheckOrImmMatcherNode - This checks to see if the current node is an 'or'
+/// with something equivalent to the specified immediate.
+class CheckOrImmMatcherNode : public MatcherNodeWithChild {
+  int64_t Value;
+public:
+  CheckOrImmMatcherNode(int64_t value)
+    : MatcherNodeWithChild(CheckOrImm), Value(value) {}
+  
+  int64_t getValue() const { return Value; }
+
+  static inline bool classof(const MatcherNode *N) {
+    return N->getKind() == CheckOrImm;
+  }
+  
+  virtual void print(raw_ostream &OS, unsigned indent = 0) const;
+};
+  
+  
+} // end namespace llvm
+
+#endif
diff --git a/libclamav/c++/llvm/utils/TableGen/DAGISelMatcherEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/DAGISelMatcherEmitter.cpp
new file mode 100644
index 0000000..1a41713
--- /dev/null
+++ b/libclamav/c++/llvm/utils/TableGen/DAGISelMatcherEmitter.cpp
@@ -0,0 +1,217 @@
+//===- DAGISelMatcherEmitter.cpp - Matcher Emitter ------------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This file contains code to generate C++ code for a matcher.
+//
+//===----------------------------------------------------------------------===//
+
+#include "DAGISelMatcher.h"
+#include "CodeGenDAGPatterns.h"
+#include "llvm/ADT/SmallString.h"
+#include "llvm/Support/Casting.h"
+#include "llvm/Support/FormattedStream.h"
+using namespace llvm;
+
+namespace {
+enum {
+  CommentIndent = 25
+};
+}
+
+static unsigned EmitMatcherAndChildren(const MatcherNode *N,
+                                       formatted_raw_ostream &FOS,
+                                       unsigned Indent);
+
+/// ClassifyInt - Classify an integer by size: return '1', '2', '4', or '8' if
+/// it fits in 1, 2, 4, or 8 sign-extended bytes.
+static char ClassifyInt(int64_t Val) {
+  if (Val == int8_t(Val))  return '1';
+  if (Val == int16_t(Val)) return '2';
+  if (Val == int32_t(Val)) return '4';
+  return '8';
+}
+
+/// EmitInt - Emit the specified integer, returning the number of bytes emitted.
+static unsigned EmitInt(int64_t Val, formatted_raw_ostream &OS) {
+  unsigned BytesEmitted = 1;
+  OS << (int)(unsigned char)Val << ", ";
+  if (Val == int8_t(Val)) {
+    OS << "\n";
+    return BytesEmitted;
+  }
+  
+  OS << (int)(unsigned char)(Val >> 8) << ", ";
+  ++BytesEmitted;
+  
+  if (Val != int16_t(Val)) {
+    OS << (int)(unsigned char)(Val >> 16) << ','
+       << (int)(unsigned char)(Val >> 24) << ',';
+    BytesEmitted += 2;
+    
+    if (Val != int32_t(Val)) {
+      OS << (int)(unsigned char)(Val >> 32) << ','
+         << (int)(unsigned char)(Val >> 40) << ','
+         << (int)(unsigned char)(Val >> 48) << ','
+         << (int)(unsigned char)(Val >> 56) << ',';
+      BytesEmitted += 4;
+    }   
+  }
+  
+  OS.PadToColumn(CommentIndent) << "// " << Val << '\n';
+  return BytesEmitted;
+}
+
+/// EmitMatcher - Emit the bytes for the specified matcher and return the
+/// number of bytes emitted.
+static unsigned EmitMatcher(const MatcherNode *N, formatted_raw_ostream &OS,
+                            unsigned Indent) {
+  OS.PadToColumn(Indent*2);
+  
+  switch (N->getKind()) {
+  case MatcherNode::Push: assert(0 && "Should be handled by caller");
+  case MatcherNode::EmitNode:
+    OS << "OPC_Emit, /*XXX*/";
+    OS.PadToColumn(CommentIndent) << "// Src: "
+      << *cast<EmitNodeMatcherNode>(N)->getPattern().getSrcPattern() << '\n';
+    OS.PadToColumn(CommentIndent) << "// Dst: "
+      << *cast<EmitNodeMatcherNode>(N)->getPattern().getDstPattern() << '\n';
+    return 1;
+  case MatcherNode::Record:
+    OS << "OPC_Record,\n";
+    return 1;
+  case MatcherNode::MoveChild:
+    OS << "OPC_MoveChild, "
+       << cast<MoveChildMatcherNode>(N)->getChildNo() << ",\n";
+    return 2;
+      
+  case MatcherNode::MoveParent:
+    OS << "OPC_MoveParent,\n";
+    return 1;
+      
+  case MatcherNode::CheckSame:
+    OS << "OPC_CheckSame, "
+       << cast<CheckSameMatcherNode>(N)->getMatchNumber() << ",\n";
+    return 2;
+
+  case MatcherNode::CheckPatternPredicate:
+    OS << "OPC_CheckPatternPredicate, /*XXX*/0,";
+    OS.PadToColumn(CommentIndent) << "// "
+      << cast<CheckPatternPredicateMatcherNode>(N)->getPredicate() << '\n';
+    return 2;
+    
+  case MatcherNode::CheckPredicate:
+    OS << "OPC_CheckPredicate, /*XXX*/0,";
+    OS.PadToColumn(CommentIndent) << "// "
+      << cast<CheckPredicateMatcherNode>(N)->getPredicateName() << '\n';
+    return 2;
+      
+  case MatcherNode::CheckOpcode:
+    OS << "OPC_CheckOpcode, "
+       << cast<CheckOpcodeMatcherNode>(N)->getOpcodeName() << ",\n";
+    return 2;
+      
+  case MatcherNode::CheckType:
+    OS << "OPC_CheckType, "
+       << getEnumName(cast<CheckTypeMatcherNode>(N)->getType()) << ",\n";
+    return 2;
+
+  case MatcherNode::CheckInteger: {
+    int64_t Val = cast<CheckIntegerMatcherNode>(N)->getValue();
+    OS << "OPC_CheckInteger" << ClassifyInt(Val) << ", ";
+    return EmitInt(Val, OS)+1;
+  }   
+  case MatcherNode::CheckCondCode:
+    OS << "OPC_CheckCondCode, ISD::"
+       << cast<CheckCondCodeMatcherNode>(N)->getCondCodeName() << ",\n";
+    return 2;
+      
+  case MatcherNode::CheckValueType:
+    OS << "OPC_CheckValueType, MVT::"
+       << cast<CheckValueTypeMatcherNode>(N)->getTypeName() << ",\n";
+    return 2;
+
+  case MatcherNode::CheckComplexPat:
+    OS << "OPC_CheckComplexPat, 0/*XXX*/,\n";
+    return 2;
+      
+  case MatcherNode::CheckAndImm: {
+    int64_t Val = cast<CheckAndImmMatcherNode>(N)->getValue();
+    OS << "OPC_CheckAndImm" << ClassifyInt(Val) << ", ";
+    return EmitInt(Val, OS)+1;
+  }
+
+  case MatcherNode::CheckOrImm: {
+    int64_t Val = cast<CheckOrImmMatcherNode>(N)->getValue();
+    OS << "OPC_CheckOrImm" << ClassifyInt(Val) << ", ";
+    return EmitInt(Val, OS)+1;
+  }
+  }
+  assert(0 && "Unreachable");
+  return 0;
+}
+
+/// EmitMatcherAndChildren - Emit the bytes for the specified matcher subtree.
+static unsigned EmitMatcherAndChildren(const MatcherNode *N,
+                                       formatted_raw_ostream &OS,
+                                       unsigned Indent) {
+  unsigned Size = 0;
+  while (1) {
+    // Push is a special case since it is binary.
+    if (const PushMatcherNode *PMN = dyn_cast<PushMatcherNode>(N)) {
+      // We need to encode the child and the offset of the failure code before
+      // emitting either of them.  Handle this by buffering the output into a
+      // string while we get the size.
+      SmallString<128> TmpBuf;
+      unsigned ChildSize;
+      {
+        raw_svector_ostream OS(TmpBuf);
+        formatted_raw_ostream FOS(OS);
+        ChildSize = 
+          EmitMatcherAndChildren(cast<PushMatcherNode>(N)->getChild(), FOS,
+                                 Indent+1);
+      }
+      
+      if (ChildSize > 255) {
+        errs() <<
+          "Tblgen internal error: can't handle predicate this complex yet\n";
+        exit(1);
+      }
+      
+      OS.PadToColumn(Indent*2);
+      OS << "OPC_Push, " << ChildSize << ",\n";
+      OS << TmpBuf.str();
+      
+      Size += 2 + ChildSize;
+      
+      N = PMN->getFailure();
+      continue;
+    }
+  
+    Size += EmitMatcher(N, OS, Indent);
+    
+    // If there are children of this node, iterate to them, otherwise we're
+    // done.
+    if (const MatcherNodeWithChild *MNWC = dyn_cast<MatcherNodeWithChild>(N))
+      N = MNWC->getChild();
+    else
+      return Size;
+  }
+}
+
+void llvm::EmitMatcherTable(const MatcherNode *Matcher, raw_ostream &O) {
+  formatted_raw_ostream OS(O);
+  
+  OS << "// The main instruction selector code.\n";
+  OS << "SDNode *SelectCode2(SDNode *N) {\n";
+
+  OS << "  static const unsigned char MatcherTable[] = {\n";
+  unsigned TotalSize = EmitMatcherAndChildren(Matcher, OS, 2);
+  OS << "    0\n  }; // Total Array size is " << (TotalSize+1) << " bytes\n\n";
+  OS << "  return SelectCodeCommon(N, MatcherTable, sizeof(MatcherTable));\n}\n";
+}
diff --git a/libclamav/c++/llvm/utils/TableGen/DAGISelMatcherGen.cpp b/libclamav/c++/llvm/utils/TableGen/DAGISelMatcherGen.cpp
new file mode 100644
index 0000000..afa2587
--- /dev/null
+++ b/libclamav/c++/llvm/utils/TableGen/DAGISelMatcherGen.cpp
@@ -0,0 +1,287 @@
+//===- DAGISelMatcherGen.cpp - Matcher generator --------------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+
+#include "DAGISelMatcher.h"
+#include "CodeGenDAGPatterns.h"
+#include "Record.h"
+#include "llvm/ADT/StringMap.h"
+using namespace llvm;
+
+namespace {
+  class MatcherGen {
+    const PatternToMatch &Pattern;
+    const CodeGenDAGPatterns &CGP;
+    
+    /// PatWithNoTypes - This is a clone of Pattern.getSrcPattern() that starts
+    /// out with all of the types removed.  This allows us to insert type checks
+    /// as we scan the tree.
+    TreePatternNode *PatWithNoTypes;
+    
+    /// VariableMap - A map from variable names ('$dst') to the recorded operand
+    /// number that they were captured as.  These are biased by 1 to make
+    /// insertion easier.
+    StringMap<unsigned> VariableMap;
+    unsigned NextRecordedOperandNo;
+    
+    MatcherNodeWithChild *Matcher;
+    MatcherNodeWithChild *CurPredicate;
+  public:
+    MatcherGen(const PatternToMatch &pattern, const CodeGenDAGPatterns &cgp);
+    
+    ~MatcherGen() {
+      delete PatWithNoTypes;
+    }
+    
+    void EmitMatcherCode();
+    
+    MatcherNodeWithChild *GetMatcher() const { return Matcher; }
+    MatcherNodeWithChild *GetCurPredicate() const { return CurPredicate; }
+  private:
+    void AddMatcherNode(MatcherNodeWithChild *NewNode);
+    void InferPossibleTypes();
+    void EmitMatchCode(const TreePatternNode *N, TreePatternNode *NodeNoTypes);
+    void EmitLeafMatchCode(const TreePatternNode *N);
+    void EmitOperatorMatchCode(const TreePatternNode *N,
+                               TreePatternNode *NodeNoTypes);
+  };
+  
+} // end anon namespace.
+
+MatcherGen::MatcherGen(const PatternToMatch &pattern,
+                       const CodeGenDAGPatterns &cgp)
+: Pattern(pattern), CGP(cgp), NextRecordedOperandNo(0),
+  Matcher(0), CurPredicate(0) {
+  // We need to produce the matcher tree for the pattern's source pattern.  To do
+  // this we need to match the structure as well as the types.  To do the type
+  // matching, we want to figure out the fewest number of type checks we need to
+  // emit.  For example, if there is only one integer type supported by a
+  // target, there should be no type comparisons at all for integer patterns!
+  //
+  // To figure out the fewest number of type checks needed, clone the pattern,
+  // remove the types, then perform type inference on the pattern as a whole.
+  // If there are unresolved types, emit an explicit check for those types,
+  // apply the type to the tree, then rerun type inference.  Iterate until all
+  // types are resolved.
+  //
+  PatWithNoTypes = Pattern.getSrcPattern()->clone();
+  PatWithNoTypes->RemoveAllTypes();
+    
+  // If there are types that are manifestly known, infer them.
+  InferPossibleTypes();
+}
+
+/// InferPossibleTypes - As we emit the pattern, we end up generating type
+/// checks and applying them to the 'PatWithNoTypes' tree.  As we do this, we
+/// want to propagate implied types as far throughout the tree as possible so
+/// that we avoid doing redundant type checks.  This does the type propagation.
+void MatcherGen::InferPossibleTypes() {
+  // TP - Get *SOME* tree pattern, we don't care which.  It is only used for
+  // diagnostics, which we know are impossible at this point.
+  TreePattern &TP = *CGP.pf_begin()->second;
+  
+  try {
+    bool MadeChange = true;
+    while (MadeChange)
+      MadeChange = PatWithNoTypes->ApplyTypeConstraints(TP,
+                                                true/*Ignore reg constraints*/);
+  } catch (...) {
+    errs() << "Type constraint application shouldn't fail!";
+    abort();
+  }
+}
+
+
+/// AddMatcherNode - Add a matcher node to the current graph we're building. 
+void MatcherGen::AddMatcherNode(MatcherNodeWithChild *NewNode) {
+  if (CurPredicate != 0)
+    CurPredicate->setChild(NewNode);
+  else
+    Matcher = NewNode;
+  CurPredicate = NewNode;
+}
+
+
+
+/// EmitLeafMatchCode - Generate matching code for leaf nodes.
+void MatcherGen::EmitLeafMatchCode(const TreePatternNode *N) {
+  assert(N->isLeaf() && "Not a leaf?");
+  // Direct match against an integer constant.
+  if (IntInit *II = dynamic_cast<IntInit*>(N->getLeafValue()))
+    return AddMatcherNode(new CheckIntegerMatcherNode(II->getValue()));
+  
+  DefInit *DI = dynamic_cast<DefInit*>(N->getLeafValue());
+  if (DI == 0) {
+    errs() << "Unknown leaf kind: " << *N << "\n";
+    abort();
+  }
+  
+  Record *LeafRec = DI->getDef();
+  if (// Handle register references.  Nothing to do here, they always match.
+      LeafRec->isSubClassOf("RegisterClass") || 
+      LeafRec->isSubClassOf("PointerLikeRegClass") ||
+      LeafRec->isSubClassOf("Register") ||
+      // Place holder for SRCVALUE nodes. Nothing to do here.
+      LeafRec->getName() == "srcvalue")
+    return;
+  
+  if (LeafRec->isSubClassOf("ValueType"))
+    return AddMatcherNode(new CheckValueTypeMatcherNode(LeafRec->getName()));
+  
+  if (LeafRec->isSubClassOf("CondCode"))
+    return AddMatcherNode(new CheckCondCodeMatcherNode(LeafRec->getName()));
+  
+  if (LeafRec->isSubClassOf("ComplexPattern")) {
+    // Handle complex pattern.
+    const ComplexPattern &CP = CGP.getComplexPattern(LeafRec);
+    return AddMatcherNode(new CheckComplexPatMatcherNode(CP));
+  }
+  
+  errs() << "Unknown leaf kind: " << *N << "\n";
+  abort();
+}
+
+void MatcherGen::EmitOperatorMatchCode(const TreePatternNode *N,
+                                       TreePatternNode *NodeNoTypes) {
+  assert(!N->isLeaf() && "Not an operator?");
+  const SDNodeInfo &CInfo = CGP.getSDNodeInfo(N->getOperator());
+  
+  // If this is an 'and R, 1234' where the operation is AND/OR and the RHS is
+  // a constant without a predicate fn that has more than one bit set, handle
+  // this as a special case.  This is usually for targets that have special
+  // handling of certain large constants (e.g. Alpha with its 8/16/32-bit
+  // handling stuff).  Using these instructions is often far more efficient
+  // than materializing the constant.  Unfortunately, both the instcombiner
+  // and the dag combiner can often infer that bits are dead, and thus drop
+  // them from the mask in the dag.  For example, it might turn 'AND X, 255'
+  // into 'AND X, 254' if it knows the low bit is set.  Emit code that checks
+  // to handle this.
+  if ((N->getOperator()->getName() == "and" || 
+       N->getOperator()->getName() == "or") &&
+      N->getChild(1)->isLeaf() && N->getChild(1)->getPredicateFns().empty()) {
+    if (IntInit *II = dynamic_cast<IntInit*>(N->getChild(1)->getLeafValue())) {
+      if (!isPowerOf2_32(II->getValue())) {  // Don't bother with single bits.
+        if (N->getOperator()->getName() == "and")
+          AddMatcherNode(new CheckAndImmMatcherNode(II->getValue()));
+        else
+          AddMatcherNode(new CheckOrImmMatcherNode(II->getValue()));
+
+        // Match the LHS of the AND as appropriate.
+        AddMatcherNode(new MoveChildMatcherNode(0));
+        EmitMatchCode(N->getChild(0), NodeNoTypes->getChild(0));
+        AddMatcherNode(new MoveParentMatcherNode());
+        return;
+      }
+    }
+  }
+  
+  // Check that the current opcode lines up.
+  AddMatcherNode(new CheckOpcodeMatcherNode(CInfo.getEnumName()));
+  
+  // If this node has a chain, then the chain is operand #0 of the SDNode, and
+  // the child numbers of the node are all offset by one.
+  unsigned OpNo = 0;
+  if (N->NodeHasProperty(SDNPHasChain, CGP))
+    OpNo = 1;
+
+  if (N->TreeHasProperty(SDNPHasChain, CGP)) {
+    // FIXME: Handle Chains with multiple uses etc.
+    //         [ld]
+    //         ^  ^
+    //         |  |
+    //        /   \---
+    //      /        [YY]
+    //      |         ^
+    //     [XX]-------|
+  }
+      
+  // FIXME: Handle Flags & .hasOneUse()
+  
+  for (unsigned i = 0, e = N->getNumChildren(); i != e; ++i, ++OpNo) {
+    // Get the code suitable for matching this child.  Move to the child, check
+    // it then move back to the parent.
+    AddMatcherNode(new MoveChildMatcherNode(i));
+    EmitMatchCode(N->getChild(i), NodeNoTypes->getChild(i));
+    AddMatcherNode(new MoveParentMatcherNode());
+  }
+}
+
+
+void MatcherGen::EmitMatchCode(const TreePatternNode *N,
+                               TreePatternNode *NodeNoTypes) {
+  // If N and NodeNoTypes don't agree on a type, then this is a case where we
+  // need to do a type check.  Emit the check, apply the type to NodeNoTypes and
+  // reinfer any correlated types.
+  if (NodeNoTypes->getExtTypes() != N->getExtTypes()) {
+    AddMatcherNode(new CheckTypeMatcherNode(N->getTypeNum(0)));
+    NodeNoTypes->setTypes(N->getExtTypes());
+    InferPossibleTypes();
+  }
+  
+  
+  // If this node has a name associated with it, capture it in VariableMap. If
+  // we already saw this name in the pattern, emit code to verify it is the same node.
+  if (!N->getName().empty()) {
+    unsigned &VarMapEntry = VariableMap[N->getName()];
+    if (VarMapEntry == 0) {
+      VarMapEntry = ++NextRecordedOperandNo;
+      AddMatcherNode(new RecordMatcherNode());
+    } else {
+      // If we get here, this is a second reference to a specific name.  Since
+      // we already have checked that the first reference is valid, we don't
+      // have to recursively match it, just check that it's the same as the
+      // previously named thing.
+      AddMatcherNode(new CheckSameMatcherNode(VarMapEntry-1));
+      return;
+    }
+  }
+  
+  // If there are node predicates for this node, generate their checks.
+  for (unsigned i = 0, e = N->getPredicateFns().size(); i != e; ++i)
+    AddMatcherNode(new CheckPredicateMatcherNode(N->getPredicateFns()[i]));
+
+  if (N->isLeaf())
+    EmitLeafMatchCode(N);
+  else
+    EmitOperatorMatchCode(N, NodeNoTypes);
+}
+
+void MatcherGen::EmitMatcherCode() {
+  // If the pattern has a predicate on it (e.g. it is only enabled when a
+  // subtarget feature is available), emit the check.
+  if (!Pattern.getPredicateCheck().empty())
+    AddMatcherNode(new 
+                 CheckPatternPredicateMatcherNode(Pattern.getPredicateCheck()));
+  
+  // Emit the matcher for the pattern structure and types.
+  EmitMatchCode(Pattern.getSrcPattern(), PatWithNoTypes);
+}
+
+
+MatcherNode *llvm::ConvertPatternToMatcher(const PatternToMatch &Pattern,
+                                           const CodeGenDAGPatterns &CGP) {
+  MatcherGen Gen(Pattern, CGP);
+
+  // Generate the code for the matcher.
+  Gen.EmitMatcherCode();
+  
+  // If the match succeeds, then we generate Pattern.
+  EmitNodeMatcherNode *Result = new EmitNodeMatcherNode(Pattern);
+  
+  // Link it into the pattern.
+  if (MatcherNodeWithChild *Pred = Gen.GetCurPredicate()) {
+    Pred->setChild(Result);
+    return Gen.GetMatcher();
+  }
+
+  // Unconditional match.
+  return Result;
+}
+
+
+
diff --git a/libclamav/c++/llvm/utils/TableGen/EDEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/EDEmitter.cpp
new file mode 100644
index 0000000..9aad2f6
--- /dev/null
+++ b/libclamav/c++/llvm/utils/TableGen/EDEmitter.cpp
@@ -0,0 +1,665 @@
+//===- EDEmitter.cpp - Generate instruction descriptions for ED -*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This tablegen backend is responsible for emitting a description of each
+// instruction in a format that the enhanced disassembler can use to tokenize
+// and parse instructions.
+//
+//===----------------------------------------------------------------------===//
+
+#include "EDEmitter.h"
+
+#include "AsmWriterInst.h"
+#include "CodeGenTarget.h"
+#include "Record.h"
+
+#include "llvm/Support/ErrorHandling.h"
+#include "llvm/Support/Format.h"
+#include "llvm/Support/raw_ostream.h"
+
+#include <vector>
+#include <string>
+
+#define MAX_OPERANDS 5
+#define MAX_SYNTAXES 2
+
+using namespace llvm;
+
+///////////////////////////////////////////////////////////
+// Support classes for emitting nested C data structures //
+///////////////////////////////////////////////////////////
+
+namespace {
+  
+  class EnumEmitter {
+  private:
+    std::string Name;
+    std::vector<std::string> Entries;
+  public:
+    EnumEmitter(const char *N) : Name(N) { 
+    }
+    int addEntry(const char *e) { 
+      Entries.push_back(std::string(e));
+      return Entries.size() - 1; 
+    }
+    void emit(raw_ostream &o, unsigned int &i) {
+      o.indent(i) << "enum " << Name.c_str() << " {" << "\n";
+      i += 2;
+      
+      unsigned int index = 0;
+      unsigned int numEntries = Entries.size();
+      for(index = 0; index < numEntries; ++index) {
+        o.indent(i) << Entries[index];
+        if(index < (numEntries - 1))
+          o << ",";
+        o << "\n";
+      }
+      
+      i -= 2;
+      o.indent(i) << "};" << "\n";
+    }
+    
+    void emitAsFlags(raw_ostream &o, unsigned int &i) {
+      o.indent(i) << "enum " << Name.c_str() << " {" << "\n";
+      i += 2;
+      
+      unsigned int index = 0;
+      unsigned int numEntries = Entries.size();
+      unsigned int flag = 1;
+      for (index = 0; index < numEntries; ++index) {
+        o.indent(i) << Entries[index] << " = " << format("0x%x", flag);
+        if (index < (numEntries - 1))
+          o << ",";
+        o << "\n";
+        flag <<= 1;
+      }
+      
+      i -= 2;
+      o.indent(i) << "};" << "\n";
+    }
+  };
+
+  class StructEmitter {
+  private:
+    std::string Name;
+    std::vector<std::string> MemberTypes;
+    std::vector<std::string> MemberNames;
+  public:
+    StructEmitter(const char *N) : Name(N) {
+    }
+    void addMember(const char *t, const char *n) {
+      MemberTypes.push_back(std::string(t));
+      MemberNames.push_back(std::string(n));
+    }
+    void emit(raw_ostream &o, unsigned int &i) {
+      o.indent(i) << "struct " << Name.c_str() << " {" << "\n";
+      i += 2;
+      
+      unsigned int index = 0;
+      unsigned int numMembers = MemberTypes.size();
+      for (index = 0; index < numMembers; ++index) {
+        o.indent(i) << MemberTypes[index] << " " << MemberNames[index] << ";";
+        o << "\n";
+      }
+      
+      i -= 2;
+      o.indent(i) << "};" << "\n";
+    }
+  };
+  
+  class ConstantEmitter {
+  public:
+    virtual ~ConstantEmitter() { }
+    virtual void emit(raw_ostream &o, unsigned int &i) = 0;
+  };
+  
+  class LiteralConstantEmitter : public ConstantEmitter {
+  private:
+    std::string Literal;
+  public:
+    LiteralConstantEmitter(const char *literal) : Literal(literal) {
+    }
+    LiteralConstantEmitter(int literal) {
+      char buf[256];
+      snprintf(buf, 256, "%d", literal);
+      Literal = buf;
+    }
+    void emit(raw_ostream &o, unsigned int &i) {
+      o << Literal;
+    }
+  };
+  
+  class CompoundConstantEmitter : public ConstantEmitter {
+  private:
+    std::vector<ConstantEmitter*> Entries;
+  public:
+    CompoundConstantEmitter() {
+    }
+    ~CompoundConstantEmitter() {
+      unsigned int index;
+      unsigned int numEntries = Entries.size();
+      for (index = 0; index < numEntries; ++index) {
+        delete Entries[index];
+      }
+    }
+    CompoundConstantEmitter &addEntry(ConstantEmitter *e) {
+      Entries.push_back(e);
+      return *this;
+    }
+    void emit(raw_ostream &o, unsigned int &i) {
+      o << "{" << "\n";
+      i += 2;
+  
+      unsigned int index;
+      unsigned int numEntries = Entries.size();
+      for (index = 0; index < numEntries; ++index) {
+        o.indent(i);
+        Entries[index]->emit(o, i);
+        if (index < (numEntries - 1))
+          o << ",";
+        o << "\n";
+      }
+      
+      i -= 2;
+      o.indent(i) << "}";
+    }
+  };
+  
+  class FlagsConstantEmitter : public ConstantEmitter {
+  private:
+    std::vector<std::string> Flags;
+  public:
+    FlagsConstantEmitter() {
+    }
+    FlagsConstantEmitter &addEntry(const char *f) {
+      Flags.push_back(std::string(f));
+      return *this;
+    }
+    void emit(raw_ostream &o, unsigned int &i) {
+      unsigned int index;
+      unsigned int numFlags = Flags.size();
+      if (numFlags == 0)
+        o << "0";
+      
+      for (index = 0; index < numFlags; ++index) {
+        o << Flags[index].c_str();
+        if (index < (numFlags - 1))
+          o << " | ";
+      }
+    }
+  };
+}
+
+EDEmitter::EDEmitter(RecordKeeper &R) : Records(R) {
+}
+
+/// populateOperandOrder - Accepts a CodeGenInstruction and generates its
+///   AsmWriterInst for the desired assembly syntax, giving an ordered list of
+///   operands in the order they appear in the printed instruction.  Then, for
+///   each entry in that list, determines the index of the same operand in the
+///   CodeGenInstruction, and emits the resulting mapping into an array, filling
+///   in unused slots with -1.
+///
+/// @arg operandOrder - The array that will be populated with the operand
+///                     mapping.  Each entry will contain -1 (invalid index
+///                     into the operands present in the AsmString) or a number
+///                     representing an index in the operand descriptor array.
+/// @arg inst         - The instruction to use when looking up the operands
+/// @arg syntax       - The syntax to use, according to LLVM's enumeration
+void populateOperandOrder(CompoundConstantEmitter *operandOrder,
+                          const CodeGenInstruction &inst,
+                          unsigned syntax) {
+  unsigned int numArgs = 0;
+  
+  AsmWriterInst awInst(inst, syntax, -1, -1);
+  
+  std::vector<AsmWriterOperand>::iterator operandIterator;
+  
+  for (operandIterator = awInst.Operands.begin();
+       operandIterator != awInst.Operands.end();
+       ++operandIterator) {
+    if (operandIterator->OperandType == 
+        AsmWriterOperand::isMachineInstrOperand) {
+      char buf[2];
+      snprintf(buf, sizeof(buf), "%u", operandIterator->CGIOpNo);
+      operandOrder->addEntry(new LiteralConstantEmitter(buf));
+      numArgs++;
+    }
+  }
+  
+  for(; numArgs < MAX_OPERANDS; numArgs++) {
+    operandOrder->addEntry(new LiteralConstantEmitter("-1"));
+  }
+}
+
+/////////////////////////////////////////////////////
+// Support functions for handling X86 instructions //
+/////////////////////////////////////////////////////
+
+#define ADDFLAG(flag) flags->addEntry(flag)
+
+#define REG(str) if (name == str) { ADDFLAG("kOperandFlagRegister"); return 0; }
+#define MEM(str) if (name == str) { ADDFLAG("kOperandFlagMemory"); return 0; }
+#define LEA(str) if (name == str) { ADDFLAG("kOperandFlagEffectiveAddress"); \
+                                    return 0; }
+#define IMM(str) if (name == str) { ADDFLAG("kOperandFlagImmediate"); \
+                                    return 0; }
+#define PCR(str) if (name == str) { ADDFLAG("kOperandFlagMemory"); \
+                                    ADDFLAG("kOperandFlagPCRelative"); \
+                                    return 0; }
+
+/// X86FlagFromOpName - Processes the name of a single X86 operand (which is
+///   actually its type) and translates it into an operand flag
+///
+/// @arg flags    - The flags object to add the flag to
+/// @arg name     - The name of the operand
+static int X86FlagFromOpName(FlagsConstantEmitter *flags,
+                             const std::string &name) {
+  REG("GR8");
+  REG("GR8_NOREX");
+  REG("GR16");
+  REG("GR32");
+  REG("GR32_NOREX");
+  REG("FR32");
+  REG("RFP32");
+  REG("GR64");
+  REG("FR64");
+  REG("VR64");
+  REG("RFP64");
+  REG("RFP80");
+  REG("VR128");
+  REG("RST");
+  REG("SEGMENT_REG");
+  REG("DEBUG_REG");
+  REG("CONTROL_REG_32");
+  REG("CONTROL_REG_64");
+  
+  MEM("i8mem");
+  MEM("i8mem_NOREX");
+  MEM("i16mem");
+  MEM("i32mem");
+  MEM("f32mem");
+  MEM("ssmem");
+  MEM("opaque32mem");
+  MEM("opaque48mem");
+  MEM("i64mem");
+  MEM("f64mem");
+  MEM("sdmem");
+  MEM("f80mem");
+  MEM("opaque80mem");
+  MEM("i128mem");
+  MEM("f128mem");
+  MEM("opaque512mem");
+  
+  LEA("lea32mem");
+  LEA("lea64_32mem");
+  LEA("lea64mem");
+  
+  IMM("i8imm");
+  IMM("i16imm");
+  IMM("i16i8imm");
+  IMM("i32imm");
+  IMM("i32imm_pcrel");
+  IMM("i32i8imm");
+  IMM("i64imm");
+  IMM("i64i8imm");
+  IMM("i64i32imm");
+  IMM("i64i32imm_pcrel");
+  IMM("SSECC");
+  
+  PCR("brtarget8");
+  PCR("offset8");
+  PCR("offset16");
+  PCR("offset32");
+  PCR("offset64");
+  PCR("brtarget");
+  
+  return 1;
+}
+
+#undef REG
+#undef MEM
+#undef LEA
+#undef IMM
+#undef PCR
+#undef ADDFLAG
+
+/// X86PopulateOperands - Handles all the operands in an X86 instruction, adding
+///   the appropriate flags to their descriptors
+///
+/// @arg operandFlags - A reference to the array of operand flag objects
+/// @arg inst         - The instruction to use as a source of information
+static void X86PopulateOperands(
+  FlagsConstantEmitter *(&operandFlags)[MAX_OPERANDS],
+  const CodeGenInstruction &inst) {
+  if (!inst.TheDef->isSubClassOf("X86Inst"))
+    return;
+  
+  unsigned int index;
+  unsigned int numOperands = inst.OperandList.size();
+  
+  for (index = 0; index < numOperands; ++index) {
+    const CodeGenInstruction::OperandInfo &operandInfo = 
+      inst.OperandList[index];
+    Record &rec = *operandInfo.Rec;
+    
+    if (X86FlagFromOpName(operandFlags[index], rec.getName())) {
+      errs() << "Operand type: " << rec.getName().c_str() << "\n";
+      errs() << "Operand name: " << operandInfo.Name.c_str() << "\n";
+      errs() << "Instruction name: " << inst.TheDef->getName().c_str() << "\n";
+      llvm_unreachable("Unhandled type");
+    }
+  }
+}
+
+/// decorate1 - Decorates a named operand with a new flag
+///
+/// @arg operandFlags - The array of operand flag objects, which don't have names
+/// @arg inst         - The CodeGenInstruction, which provides a way to translate
+///                     between names and operand indices
+/// @arg opName       - The name of the operand
+/// @arg opFlag       - The name of the flag to add
+static inline void decorate1(FlagsConstantEmitter *(&operandFlags)[MAX_OPERANDS],
+                             const CodeGenInstruction &inst,
+                             const char *opName,
+                             const char *opFlag) {
+  unsigned opIndex;
+  
+  opIndex = inst.getOperandNamed(std::string(opName));
+  
+  operandFlags[opIndex]->addEntry(opFlag);
+}
+
+#define DECORATE1(opName, opFlag) decorate1(operandFlags, inst, opName, opFlag)
+
+#define MOV(source, target) {                       \
+  instFlags.addEntry("kInstructionFlagMove");       \
+  DECORATE1(source, "kOperandFlagSource");          \
+  DECORATE1(target, "kOperandFlagTarget");          \
+}
+
+#define BRANCH(target) {                            \
+  instFlags.addEntry("kInstructionFlagBranch");     \
+  DECORATE1(target, "kOperandFlagTarget");          \
+}
+
+#define PUSH(source) {                              \
+  instFlags.addEntry("kInstructionFlagPush");       \
+  DECORATE1(source, "kOperandFlagSource");          \
+}
+
+#define POP(target) {                               \
+  instFlags.addEntry("kInstructionFlagPop");        \
+  DECORATE1(target, "kOperandFlagTarget");          \
+}
+
+#define CALL(target) {                              \
+  instFlags.addEntry("kInstructionFlagCall");       \
+  DECORATE1(target, "kOperandFlagTarget");          \
+}
+
+#define RETURN() {                                  \
+  instFlags.addEntry("kInstructionFlagReturn");     \
+}
+
+/// X86ExtractSemantics - Performs various checks on the name of an X86
+///   instruction to determine what sort of an instruction it is and then adds 
+///   the appropriate flags to the instruction and its operands
+///
+/// @arg instFlags    - A reference to the flags for the instruction as a whole
+/// @arg operandFlags - A reference to the array of operand flag object pointers
+/// @arg inst         - A reference to the original instruction
+static void X86ExtractSemantics(FlagsConstantEmitter &instFlags,
+                                FlagsConstantEmitter *(&operandFlags)[MAX_OPERANDS],
+                                const CodeGenInstruction &inst) {
+  const std::string &name = inst.TheDef->getName();
+    
+  if (name.find("MOV") != name.npos) {
+    if (name.find("MOV_V") != name.npos) {
+      // ignore (this is a pseudoinstruction)
+    }
+    else if (name.find("MASK") != name.npos) {
+      // ignore (this is a masking move)
+    }
+    else if (name.find("r0") != name.npos) {
+      // ignore (this is a pseudoinstruction)
+    }
+    else if (name.find("PS") != name.npos ||
+             name.find("PD") != name.npos) {
+      // ignore (this is a shuffling move)
+    }
+    else if (name.find("MOVS") != name.npos) {
+      // ignore (this is a string move)
+    }
+    else if (name.find("_F") != name.npos) {
+      // TODO handle _F moves to ST(0)
+    }
+    else if (name.find("a") != name.npos) {
+      // TODO handle moves to/from %ax
+    }
+    else if (name.find("CMOV") != name.npos) {
+      MOV("src2", "dst");
+    }
+    else if (name.find("PC") != name.npos) {
+      MOV("label", "reg");
+    }
+    else {
+      MOV("src", "dst");
+    }
+  }
+  
+  if (name.find("JMP") != name.npos ||
+      name.find("J") == 0) {
+    if (name.find("FAR") != name.npos && name.find("i") != name.npos) {
+      BRANCH("off");
+    }
+    else {
+      BRANCH("dst");
+    }
+  }
+  
+  if (name.find("PUSH") != name.npos) {
+    if (name.find("FS") != name.npos ||
+        name.find("GS") != name.npos) {
+      instFlags.addEntry("kInstructionFlagPush");
+      // TODO add support for fixed operands
+    }
+    else if (name.find("F") != name.npos) {
+      // ignore (this pushes onto the FP stack)
+    }
+    else if (name[name.length() - 1] == 'm') {
+      PUSH("src");
+    }
+    else if (name.find("i") != name.npos) {
+      PUSH("imm");
+    }
+    else {
+      PUSH("reg");
+    }
+  }
+  
+  if (name.find("POP") != name.npos) {
+    if (name.find("POPCNT") != name.npos) {
+      // ignore (not a real pop)
+    }
+    else if (name.find("FS") != name.npos ||
+             name.find("GS") != name.npos) {
+      instFlags.addEntry("kInstructionFlagPop");
+      // TODO add support for fixed operands
+    }
+    else if (name.find("F") != name.npos) {
+      // ignore (this pops from the FP stack)
+    }
+    else if (name[name.length() - 1] == 'm') {
+      POP("dst");
+    }
+    else {
+      POP("reg");
+    }
+  }
+  
+  if (name.find("CALL") != name.npos) {
+    if (name.find("ADJ") != name.npos) {
+      // ignore (not a call)
+    }
+    else if (name.find("SYSCALL") != name.npos) {
+      // ignore (doesn't go anywhere we know about)
+    }
+    else if (name.find("VMCALL") != name.npos) {
+      // ignore (rather different semantics than a regular call)
+    }
+    else if (name.find("FAR") != name.npos && name.find("i") != name.npos) {
+      CALL("off");
+    }
+    else {
+      CALL("dst");
+    }
+  }
+  
+  if (name.find("RET") != name.npos) {
+    RETURN();
+  }
+}
+
+#undef MOV
+#undef BRANCH
+#undef PUSH
+#undef POP
+#undef CALL
+#undef RETURN
+
+#undef COND_DECORATE_2
+#undef COND_DECORATE_1
+#undef DECORATE1
+
+/// populateInstInfo - Fills an array of InstInfos with information about each 
+///   instruction in a target
+///
+/// @arg infoArray  - The array of InstInfo objects to populate
+/// @arg target     - The CodeGenTarget to use as a source of instructions
+static void populateInstInfo(CompoundConstantEmitter &infoArray,
+                             CodeGenTarget &target) {
+  std::vector<const CodeGenInstruction*> numberedInstructions;
+  target.getInstructionsByEnumValue(numberedInstructions);
+  
+  unsigned int index;
+  unsigned int numInstructions = numberedInstructions.size();
+  
+  for (index = 0; index < numInstructions; ++index) {
+    const CodeGenInstruction& inst = *numberedInstructions[index];
+    
+    CompoundConstantEmitter *infoStruct = new CompoundConstantEmitter;
+    infoArray.addEntry(infoStruct);
+    
+    FlagsConstantEmitter *instFlags = new FlagsConstantEmitter;
+    infoStruct->addEntry(instFlags);
+    
+    LiteralConstantEmitter *numOperandsEmitter = 
+      new LiteralConstantEmitter(inst.OperandList.size());
+    infoStruct->addEntry(numOperandsEmitter);
+                         
+    CompoundConstantEmitter *operandFlagArray = new CompoundConstantEmitter;
+    infoStruct->addEntry(operandFlagArray);
+        
+    FlagsConstantEmitter *operandFlags[MAX_OPERANDS];
+    
+    for (unsigned operandIndex = 0; operandIndex < MAX_OPERANDS; ++operandIndex) {
+      operandFlags[operandIndex] = new FlagsConstantEmitter;
+      operandFlagArray->addEntry(operandFlags[operandIndex]);
+    }
+ 
+    unsigned numSyntaxes = 0;
+    
+    if (target.getName() == "X86") {
+      X86PopulateOperands(operandFlags, inst);
+      X86ExtractSemantics(*instFlags, operandFlags, inst);
+      numSyntaxes = 2;
+    }
+    
+    CompoundConstantEmitter *operandOrderArray = new CompoundConstantEmitter;
+    infoStruct->addEntry(operandOrderArray);
+    
+    for (unsigned syntaxIndex = 0; syntaxIndex < MAX_SYNTAXES; ++syntaxIndex) {
+      CompoundConstantEmitter *operandOrder = new CompoundConstantEmitter;
+      operandOrderArray->addEntry(operandOrder);
+      
+      if (syntaxIndex < numSyntaxes) {
+        populateOperandOrder(operandOrder, inst, syntaxIndex);
+      }
+      else {
+        for (unsigned operandIndex = 0; 
+             operandIndex < MAX_OPERANDS; 
+             ++operandIndex) {
+          operandOrder->addEntry(new LiteralConstantEmitter("-1"));
+        }
+      }
+    }
+  }
+}
+
+void EDEmitter::run(raw_ostream &o) {
+  unsigned int i = 0;
+  
+  CompoundConstantEmitter infoArray;
+  CodeGenTarget target;
+  
+  populateInstInfo(infoArray, target);
+  
+  o << "InstInfo instInfo" << target.getName().c_str() << "[] = ";
+  infoArray.emit(o, i);
+  o << ";" << "\n";
+}
+
+void EDEmitter::runHeader(raw_ostream &o) {
+  EmitSourceFileHeader("Enhanced Disassembly Info Header", o);
+  
+  o << "#ifndef EDInfo_" << "\n";
+  o << "#define EDInfo_" << "\n";
+  o << "\n";
+  o << "#include <inttypes.h>" << "\n";
+  o << "\n";
+  o << "#define MAX_OPERANDS " << format("%d", MAX_OPERANDS) << "\n";
+  o << "#define MAX_SYNTAXES " << format("%d", MAX_SYNTAXES) << "\n";
+  o << "\n";
+  
+  unsigned int i = 0;
+  
+  EnumEmitter operandFlags("OperandFlags");
+  operandFlags.addEntry("kOperandFlagImmediate");
+  operandFlags.addEntry("kOperandFlagRegister");
+  operandFlags.addEntry("kOperandFlagMemory");
+  operandFlags.addEntry("kOperandFlagEffectiveAddress");
+  operandFlags.addEntry("kOperandFlagPCRelative");
+  operandFlags.addEntry("kOperandFlagSource");
+  operandFlags.addEntry("kOperandFlagTarget");
+  operandFlags.emitAsFlags(o, i);
+  
+  o << "\n";
+  
+  EnumEmitter instructionFlags("InstructionFlags");
+  instructionFlags.addEntry("kInstructionFlagMove");
+  instructionFlags.addEntry("kInstructionFlagBranch");
+  instructionFlags.addEntry("kInstructionFlagPush");
+  instructionFlags.addEntry("kInstructionFlagPop");
+  instructionFlags.addEntry("kInstructionFlagCall");
+  instructionFlags.addEntry("kInstructionFlagReturn");
+  instructionFlags.emitAsFlags(o, i);
+  
+  o << "\n";
+  
+  StructEmitter instInfo("InstInfo");
+  instInfo.addMember("uint32_t", "instructionFlags");
+  instInfo.addMember("uint8_t", "numOperands");
+  instInfo.addMember("uint8_t", "operandFlags[MAX_OPERANDS]");
+  instInfo.addMember("const char", "operandOrders[MAX_SYNTAXES][MAX_OPERANDS]");
+  instInfo.emit(o, i);
+  
+  o << "\n";
+  o << "#endif" << "\n";
+}
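The operand-order rows emitted by populateOperandOrder above can be sketched in Python: each machine-instruction operand's index is recorded as a string, then the row is padded with "-1" up to MAX_OPERANDS. The MAX_OPERANDS value here is an assumption for illustration; the real value is whatever the emitted EDInfo header defines.

```python
MAX_OPERANDS = 5  # assumed value for illustration only


def operand_order(machine_operand_indices):
    """Mirror populateOperandOrder: record each visible operand's
    index, then pad the row with "-1" up to MAX_OPERANDS."""
    order = [str(i) for i in machine_operand_indices]
    order += ["-1"] * (MAX_OPERANDS - len(order))
    return order


# A hypothetical two-operand instruction whose AsmString prints
# operand 1 before operand 0.
row = operand_order([1, 0])
```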
diff --git a/libclamav/c++/llvm/utils/TableGen/EDEmitter.h b/libclamav/c++/llvm/utils/TableGen/EDEmitter.h
new file mode 100644
index 0000000..9e40a8b
--- /dev/null
+++ b/libclamav/c++/llvm/utils/TableGen/EDEmitter.h
@@ -0,0 +1,37 @@
+//===- EDEmitter.h - Generate instruction descriptions for ED ---*- C++ -*-===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This tablegen backend is responsible for emitting a description of each
+// instruction in a format that the semantic disassembler can use to tokenize
+// and parse instructions.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef SEMANTIC_INFO_EMITTER_H
+#define SEMANTIC_INFO_EMITTER_H
+
+#include "TableGenBackend.h"
+
+namespace llvm {
+  
+  class EDEmitter : public TableGenBackend {
+    RecordKeeper &Records;
+  public:
+    EDEmitter(RecordKeeper &R);
+    
+    // run - Output the instruction table.
+    void run(raw_ostream &o);
+    
+    // runHeader - Emit a header file that allows use of the instruction table.
+    void runHeader(raw_ostream &o);
+  };
+  
+} // End llvm namespace
+
+#endif
diff --git a/libclamav/c++/llvm/utils/TableGen/InstrInfoEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/InstrInfoEmitter.cpp
index cf40c78..898c92a 100644
--- a/libclamav/c++/llvm/utils/TableGen/InstrInfoEmitter.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/InstrInfoEmitter.cpp
@@ -118,7 +118,20 @@ InstrInfoEmitter::GetOperandInfo(const CodeGenInstruction &Inst) {
         Res += "|(1<<TOI::OptionalDef)";
 
       // Fill in constraint info.
-      Res += ", " + Inst.OperandList[i].Constraints[j];
+      Res += ", ";
+      
+      const CodeGenInstruction::ConstraintInfo &Constraint =
+        Inst.OperandList[i].Constraints[j];
+      if (Constraint.isNone())
+        Res += "0";
+      else if (Constraint.isEarlyClobber())
+        Res += "(1 << TOI::EARLY_CLOBBER)";
+      else {
+        assert(Constraint.isTied());
+        Res += "((" + utostr(Constraint.getTiedOperand()) +
+                    " << 16) | (1 << TOI::TIED_TO))";
+      }
+        
       Result.push_back(Res);
     }
   }
@@ -346,7 +359,7 @@ void InstrInfoEmitter::emitShiftedValue(Record *R, StringInit *Val,
         R->getName() != "IMPLICIT_DEF" &&
         R->getName() != "SUBREG_TO_REG" &&
         R->getName() != "COPY_TO_REGCLASS" &&
-        R->getName() != "DEBUG_VALUE")
+        R->getName() != "DBG_VALUE")
       throw R->getName() + " doesn't have a field named '" + 
             Val->getValue() + "'!";
     return;
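The constraint encoding in the hunk above packs the tied operand index into the upper half of the emitted value, alongside a TIED_TO flag bit. A Python sketch of the packing and unpacking, assuming (as an illustration, not from the diff) that TOI::TIED_TO is bit 0:

```python
TIED_TO_BIT = 0  # assumed bit position of TOI::TIED_TO


def encode_tied(tied_operand):
    # ((TiedOperand << 16) | (1 << TOI::TIED_TO)) from the diff above.
    return (tied_operand << 16) | (1 << TIED_TO_BIT)


def decode_tied(constraints):
    # Recover the tied operand index, or None if the flag is not set.
    if constraints & (1 << TIED_TO_BIT):
        return constraints >> 16
    return None
```

Storing the operand index in the high bits is what lets the later X86RecognizableInstr hunk replace its fragile string search over the constraint text with a direct `Constraint.getTiedOperand()` call.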
diff --git a/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.cpp b/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.cpp
index 88fb6c3..2abc94b 100644
--- a/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/LLVMCConfigurationEmitter.cpp
@@ -116,7 +116,7 @@ bool IsDagEmpty (const DagInit& d) {
 // EscapeVariableName - Escape commas and other symbols not allowed
 // in the C++ variable names. Makes it possible to use options named
 // like "Wa," (useful for prefix options).
-std::string EscapeVariableName(const std::string& Var) {
+std::string EscapeVariableName (const std::string& Var) {
   std::string ret;
   for (unsigned i = 0; i != Var.size(); ++i) {
     char cur_char = Var[i];
@@ -136,6 +136,21 @@ std::string EscapeVariableName(const std::string& Var) {
   return ret;
 }
 
+/// EscapeQuotes - Replace '"' with '\"'.
+std::string EscapeQuotes (const std::string& Var) {
+  std::string ret;
+  for (unsigned i = 0; i != Var.size(); ++i) {
+    char cur_char = Var[i];
+    if (cur_char == '"') {
+      ret += "\\\"";
+    }
+    else {
+      ret.push_back(cur_char);
+    }
+  }
+  return ret;
+}
+
 /// OneOf - Does the input string contain this character?
 bool OneOf(const char* lst, char c) {
   while (*lst) {
@@ -594,7 +609,7 @@ private:
 
   void onHelp (const DagInit& d) {
     CheckNumberOfArguments(d, 1);
-    optDesc_.Help = InitPtrToString(d.getArg(0));
+    optDesc_.Help = EscapeQuotes(InitPtrToString(d.getArg(0)));
   }
 
   void onHidden (const DagInit& d) {
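The EscapeQuotes helper added above walks the string character by character, but its effect is a plain substitution; an equivalent Python one-liner:

```python
def escape_quotes(s):
    # Replace '"' with '\"', as EscapeQuotes does character by character.
    return s.replace('"', '\\"')
```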
diff --git a/libclamav/c++/llvm/utils/TableGen/TableGen.cpp b/libclamav/c++/llvm/utils/TableGen/TableGen.cpp
index 7c8d288..f20ec00 100644
--- a/libclamav/c++/llvm/utils/TableGen/TableGen.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/TableGen.cpp
@@ -22,6 +22,7 @@
 #include "CodeEmitterGen.h"
 #include "DAGISelEmitter.h"
 #include "DisassemblerEmitter.h"
+#include "EDEmitter.h"
 #include "FastISelEmitter.h"
 #include "InstrEnumEmitter.h"
 #include "InstrInfoEmitter.h"
@@ -58,6 +59,7 @@ enum ActionType {
   GenIntrinsic,
   GenTgtIntrinsic,
   GenLLVMCConf,
+  GenEDHeader, GenEDInfo,
   PrintEnums
 };
 
@@ -106,6 +108,10 @@ namespace {
                                "Generate Clang diagnostic groups"),
                     clEnumValN(GenLLVMCConf, "gen-llvmc",
                                "Generate LLVMC configuration library"),
+                    clEnumValN(GenEDHeader, "gen-enhanced-disassembly-header",
+                               "Generate enhanced disassembly info header"),
+                    clEnumValN(GenEDInfo, "gen-enhanced-disassembly-info",
+                               "Generate enhanced disassembly info"),
                     clEnumValN(PrintEnums, "print-enums",
                                "Print enum values for a class"),
                     clEnumValEnd));
@@ -259,6 +265,12 @@ int main(int argc, char **argv) {
     case GenLLVMCConf:
       LLVMCConfigurationEmitter(Records).run(*Out);
       break;
+    case GenEDHeader:
+      EDEmitter(Records).runHeader(*Out);
+      break;
+    case GenEDInfo:
+      EDEmitter(Records).run(*Out);
+      break;
     case PrintEnums:
     {
       std::vector<Record*> Recs = Records.getAllDerivedDefinitions(Class);
diff --git a/libclamav/c++/llvm/utils/TableGen/X86RecognizableInstr.cpp b/libclamav/c++/llvm/utils/TableGen/X86RecognizableInstr.cpp
index 2b6e30d..3843e56 100644
--- a/libclamav/c++/llvm/utils/TableGen/X86RecognizableInstr.cpp
+++ b/libclamav/c++/llvm/utils/TableGen/X86RecognizableInstr.cpp
@@ -24,6 +24,18 @@
 
 using namespace llvm;
 
+#define MRM_MAPPING     \
+  MAP(C1, 33)           \
+  MAP(C2, 34)           \
+  MAP(C3, 35)           \
+  MAP(C4, 36)           \
+  MAP(C8, 37)           \
+  MAP(C9, 38)           \
+  MAP(E8, 39)           \
+  MAP(F0, 40)           \
+  MAP(F8, 41)           \
+  MAP(F9, 42)
+
 // A clone of X86 since we can't depend on something that is generated.
 namespace X86Local {
   enum {
@@ -38,7 +50,12 @@ namespace X86Local {
     MRM4r = 20, MRM5r = 21, MRM6r = 22, MRM7r = 23,
     MRM0m = 24, MRM1m = 25, MRM2m = 26, MRM3m = 27,
     MRM4m = 28, MRM5m = 29, MRM6m = 30, MRM7m = 31,
-    MRMInitReg  = 32
+    MRMInitReg  = 32,
+    
+#define MAP(from, to) MRM_##from = to,
+    MRM_MAPPING
+#undef MAP
+    lastMRM
   };
   
   enum {
@@ -47,10 +64,28 @@ namespace X86Local {
     D8 = 3, D9 = 4, DA = 5, DB = 6,
     DC = 7, DD = 8, DE = 9, DF = 10,
     XD = 11,  XS = 12,
-    T8 = 13,  TA = 14
+    T8 = 13,  P_TA = 14,
+    P_0F_AE = 16, P_0F_01 = 17
   };
 }
-  
+
+// If rows are added to the opcode extension tables, then corresponding entries
+// must be added here.  
+//
+// If the row corresponds to a single byte (i.e., 8f), then add an entry for
+// that byte to ONE_BYTE_EXTENSION_TABLES.
+//
+// If the row corresponds to two bytes where the first is 0f, add an entry for 
+// the second byte to TWO_BYTE_EXTENSION_TABLES.
+//
+// If the row corresponds to some other set of bytes, you will need to modify
+// the code in RecognizableInstr::emitDecodePath() as well, and add new prefixes
+// to the X86 TD files.  There are two exceptions: if the first two bytes of
+// such a new combination are 0f 38 or 0f 3a, it is enough to add maps called
+// THREE_BYTE_38_EXTENSION_TABLES and THREE_BYTE_3A_EXTENSION_TABLES and add a
+// switch(Opcode) just below the case X86Local::T8: or case X86Local::TA: line
+// in RecognizableInstr::emitDecodePath().
+
 #define ONE_BYTE_EXTENSION_TABLES \
   EXTENSION_TABLE(80)             \
   EXTENSION_TABLE(81)             \
@@ -81,10 +116,6 @@ namespace X86Local {
   EXTENSION_TABLE(b9)             \
   EXTENSION_TABLE(ba)             \
   EXTENSION_TABLE(c7)
-  
-#define TWO_BYTE_FULL_EXTENSION_TABLES \
-  EXTENSION_TABLE(01)
-  
 
 using namespace X86Disassembler;
 
@@ -402,13 +433,10 @@ void RecognizableInstr::emitInstructionSpecifier(DisassemblerTables &tables) {
   
   for (operandIndex = 0; operandIndex < numOperands; ++operandIndex) {
     if (OperandList[operandIndex].Constraints.size()) {
-      const std::string &constraint = OperandList[operandIndex].Constraints[0];
-      std::string::size_type tiedToPos;
-
-      if ((tiedToPos = constraint.find(" << 16) | (1 << TOI::TIED_TO))")) !=
-         constraint.npos) {
-        tiedToPos--;
-        operandMapping[operandIndex] = constraint[tiedToPos] - '0';
+      const CodeGenInstruction::ConstraintInfo &Constraint =
+        OperandList[operandIndex].Constraints[0];
+      if (Constraint.isTied()) {
+        operandMapping[operandIndex] = Constraint.getTiedOperand();
       } else {
         ++numPhysicalOperands;
         operandMapping[operandIndex] = operandIndex;
@@ -552,36 +580,10 @@ void RecognizableInstr::emitInstructionSpecifier(DisassemblerTables &tables) {
 void RecognizableInstr::emitDecodePath(DisassemblerTables &tables) const {
   // Special cases where the LLVM tables are not complete
 
-#define EXACTCASE(class, name, lastbyte)         \
-  if (Name == name) {                           \
-    tables.setTableFields(class,                 \
-                          insnContext(),         \
-                          Opcode,               \
-                          ExactFilter(lastbyte), \
-                          UID);                 \
-    Spec->modifierBase = Opcode;               \
-    return;                                      \
-  } 
-
-  EXACTCASE(TWOBYTE, "MONITOR",  0xc8)
-  EXACTCASE(TWOBYTE, "MWAIT",    0xc9)
-  EXACTCASE(TWOBYTE, "SWPGS",    0xf8)
-  EXACTCASE(TWOBYTE, "INVEPT",   0x80)
-  EXACTCASE(TWOBYTE, "INVVPID",  0x81)
-  EXACTCASE(TWOBYTE, "VMCALL",   0xc1)
-  EXACTCASE(TWOBYTE, "VMLAUNCH", 0xc2)
-  EXACTCASE(TWOBYTE, "VMRESUME", 0xc3)
-  EXACTCASE(TWOBYTE, "VMXOFF",   0xc4)
-
-  if (Name == "INVLPG") {
-    tables.setTableFields(TWOBYTE,
-                          insnContext(),
-                          Opcode,
-                          ExtendedFilter(false, 7),
-                          UID);
-    Spec->modifierBase = Opcode;
-    return;
-  }
+#define MAP(from, to)                     \
+  case X86Local::MRM_##from:              \
+    filter = new ExactFilter(0x##from);   \
+    break;
 
   OpcodeType    opcodeType  = (OpcodeType)-1;
   
@@ -596,6 +598,12 @@ void RecognizableInstr::emitDecodePath(DisassemblerTables &tables) const {
     opcodeType = TWOBYTE;
 
     switch (Opcode) {
+    default:
+      if (needsModRMForDecode(Form))
+        filter = new ModFilter(isRegFormat(Form));
+      else
+        filter = new DumbFilter();
+      break;
 #define EXTENSION_TABLE(n) case 0x##n:
     TWO_BYTE_EXTENSION_TABLES
 #undef EXTENSION_TABLE
@@ -622,16 +630,10 @@ void RecognizableInstr::emitDecodePath(DisassemblerTables &tables) const {
       case X86Local::MRM7m:
         filter = new ExtendedFilter(false, Form - X86Local::MRM0m);
         break;
+      MRM_MAPPING
       } // switch (Form)
       break;
-    default:
-      if (needsModRMForDecode(Form))
-        filter = new ModFilter(isRegFormat(Form));
-      else
-        filter = new DumbFilter();
-        
-      break;
-    } // switch (opcode)
+    } // switch (Opcode)
     opcodeToSet = Opcode;
     break;
   case X86Local::T8:
@@ -642,7 +644,7 @@ void RecognizableInstr::emitDecodePath(DisassemblerTables &tables) const {
       filter = new DumbFilter();
     opcodeToSet = Opcode;
     break;
-  case X86Local::TA:
+  case X86Local::P_TA:
     opcodeType = THREEBYTE_3A;
     if (needsModRMForDecode(Form))
       filter = new ModFilter(isRegFormat(Form));
@@ -699,6 +701,7 @@ void RecognizableInstr::emitDecodePath(DisassemblerTables &tables) const {
       case X86Local::MRM7m:
         filter = new ExtendedFilter(false, Form - X86Local::MRM0m);
         break;
+      MRM_MAPPING
       } // switch (Form)
       break;
     case 0xd8:
@@ -763,6 +766,8 @@ void RecognizableInstr::emitDecodePath(DisassemblerTables &tables) const {
   }
   
   delete filter;
+  
+#undef MAP
 }
 
 #define TYPE(str, type) if (s == str) return type;
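The MRM_MAPPING macro above ties each fixed-ModR/M form constant back to its literal ModR/M byte, so the decode path can build an ExactFilter for it. The same table as a Python dict, with values copied from the MAP(from, to) entries in the diff:

```python
# Form enum value -> fixed ModR/M byte, per the MAP(from, to) entries:
# MRM_C1 = 33 filters on 0xC1, MRM_C2 = 34 on 0xC2, and so on.
MRM_MAPPING = {
    33: 0xC1, 34: 0xC2, 35: 0xC3, 36: 0xC4, 37: 0xC8,
    38: 0xC9, 39: 0xE8, 40: 0xF0, 41: 0xF8, 42: 0xF9,
}


def exact_filter_byte(form):
    """Mirror the generated 'case X86Local::MRM_xx:
    filter = new ExactFilter(0xxx);' switch arms."""
    return MRM_MAPPING[form]
```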
diff --git a/libclamav/c++/llvm/utils/UpdateCMakeLists.pl b/libclamav/c++/llvm/utils/UpdateCMakeLists.pl
index 6d24d90..8f53514 100755
--- a/libclamav/c++/llvm/utils/UpdateCMakeLists.pl
+++ b/libclamav/c++/llvm/utils/UpdateCMakeLists.pl
@@ -68,7 +68,8 @@ sub UpdateCMake {
   while(<IN>) {
     if (!$foundLibrary) {
       print OUT $_;
-      if (/^add_clang_library\(/ || /^add_llvm_library\(/ || /^add_llvm_target\(/) {
+      if (/^add_clang_library\(/ || /^add_llvm_library\(/ || /^add_llvm_target\(/
+          || /^add_executable\(/) {
         $foundLibrary = 1;
         EmitCMakeList($dir);
       }
diff --git a/libclamav/c++/llvm/utils/lit/lit/ShUtil.py b/libclamav/c++/llvm/utils/lit/lit/ShUtil.py
index c4bbb3d..c8f9332 100644
--- a/libclamav/c++/llvm/utils/lit/lit/ShUtil.py
+++ b/libclamav/c++/llvm/utils/lit/lit/ShUtil.py
@@ -66,7 +66,7 @@ class ShLexer:
                 return (tok[0], num)                    
             elif c == '"':
                 self.eat()
-                str += self.lex_arg_quoted('"')
+                str += self.lex_arg_quoted('"')
             elif not self.win32Escapes and c == '\\':
                 # Outside of a string, '\\' escapes everything.
                 self.eat()
diff --git a/libclamav/c++/llvm/utils/lit/lit/TestFormats.py b/libclamav/c++/llvm/utils/lit/lit/TestFormats.py
index 5dfd54a..d87a467 100644
--- a/libclamav/c++/llvm/utils/lit/lit/TestFormats.py
+++ b/libclamav/c++/llvm/utils/lit/lit/TestFormats.py
@@ -9,13 +9,19 @@ class GoogleTest(object):
         self.test_sub_dir = str(test_sub_dir)
         self.test_suffix = str(test_suffix)
 
-    def getGTestTests(self, path, litConfig):
+    def getGTestTests(self, path, litConfig, localConfig):
         """getGTestTests(path) - [name]
-        
-        Return the tests available in gtest executable."""
+
+        Return the tests available in gtest executable.
+
+        Args:
+          path: String path to a gtest executable
+          litConfig: LitConfig instance
+          localConfig: TestingConfig instance"""
 
         try:
-            lines = Util.capture([path, '--gtest_list_tests']).split('\n')
+            lines = Util.capture([path, '--gtest_list_tests'],
+                                 env=localConfig.environment).split('\n')
         except:
             litConfig.error("unable to discover google-tests in %r" % path)
             raise StopIteration
@@ -52,7 +58,8 @@ class GoogleTest(object):
                     execpath = os.path.join(filepath, subfilename)
 
                     # Discover the tests in this executable.
-                    for name in self.getGTestTests(execpath, litConfig):
+                    for name in self.getGTestTests(execpath, litConfig,
+                                                   localConfig):
                         testPath = path_in_suite + (filename, subfilename, name)
                         yield Test.Test(testSuite, testPath, localConfig)
 
@@ -65,7 +72,8 @@ class GoogleTest(object):
             testName = os.path.join(namePrefix, testName)
 
         cmd = [testPath, '--gtest_filter=' + testName]
-        out, err, exitCode = TestRunner.executeCommand(cmd)
+        out, err, exitCode = TestRunner.executeCommand(
+            cmd, env=test.config.environment)
             
         if not exitCode:
             return Test.PASS,''
@@ -79,6 +87,10 @@ class FileBasedTest(object):
                             litConfig, localConfig):
         source_path = testSuite.getSourcePath(path_in_suite)
         for filename in os.listdir(source_path):
+            # Ignore dot files.
+            if filename.startswith('.'):
+                continue
+
             filepath = os.path.join(source_path, filename)
             if not os.path.isdir(filepath):
                 base,ext = os.path.splitext(filename)
@@ -129,7 +141,8 @@ class OneCommandPerFileTest:
                               d not in localConfig.excludes)]
 
             for filename in filenames:
-                if (not self.pattern.match(filename) or
+                if (filename.startswith('.') or
+                    not self.pattern.match(filename) or
                     filename in localConfig.excludes):
                     continue
 
diff --git a/libclamav/c++/llvm/utils/lit/lit/Util.py b/libclamav/c++/llvm/utils/lit/lit/Util.py
index 66c5e46..414b714 100644
--- a/libclamav/c++/llvm/utils/lit/lit/Util.py
+++ b/libclamav/c++/llvm/utils/lit/lit/Util.py
@@ -39,11 +39,12 @@ def mkdir_p(path):
         if e.errno != errno.EEXIST:
             raise
 
-def capture(args):
+def capture(args, env=None):
     import subprocess
     """capture(command) - Run the given command (or argv list) in a shell and
     return the standard output."""
-    p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+    p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
+                         env=env)
     out,_ = p.communicate()
     return out
 
diff --git a/libclamav/c++/llvm/utils/llvm.grm b/libclamav/c++/llvm/utils/llvm.grm
index 4499d4b..86a707a 100644
--- a/libclamav/c++/llvm/utils/llvm.grm
+++ b/libclamav/c++/llvm/utils/llvm.grm
@@ -161,6 +161,7 @@ FuncAttr      ::= noreturn
  | signext
  | readnone
  | readonly
+ | inlinehint
  | noinline
  | alwaysinline
  | optsize
diff --git a/libclamav/c++/llvm/utils/vim/llvm.vim b/libclamav/c++/llvm/utils/vim/llvm.vim
index 48a4c68..6e4a207 100644
--- a/libclamav/c++/llvm/utils/vim/llvm.vim
+++ b/libclamav/c++/llvm/utils/vim/llvm.vim
@@ -51,7 +51,7 @@ syn keyword llvmKeyword volatile fastcc coldcc cc ccc
 syn keyword llvmKeyword x86_stdcallcc x86_fastcallcc
 syn keyword llvmKeyword signext zeroext inreg sret nounwind noreturn
 syn keyword llvmKeyword nocapture byval nest readnone readonly noalias
-syn keyword llvmKeyword noinline alwaysinline optsize ssp sspreq
+syn keyword llvmKeyword inlinehint noinline alwaysinline optsize ssp sspreq
 syn keyword llvmKeyword noredzone noimplicitfloat naked
 syn keyword llvmKeyword module asm align tail to
 syn keyword llvmKeyword addrspace section alias sideeffect c gc
diff --git a/libclamav/c++/strip-llvm.sh b/libclamav/c++/strip-llvm.sh
index 92f0244..1f37a61 100755
--- a/libclamav/c++/strip-llvm.sh
+++ b/libclamav/c++/strip-llvm.sh
@@ -23,7 +23,7 @@ for i in llvm/bindings/ llvm/examples/ llvm/projects/ llvm/runtime/\
     llvm/tools/llvm-extract llvm/tools/llvm-ld llvm/tools/llvm-link llvm/tools/llvm-mc\
     llvm/tools/llvm-nm llvm/tools/llvm-prof llvm/tools/llvm-ranlib\
     llvm/tools/llvm-stub llvm/tools/lto llvm/tools/opt llvm/lib/MC/MCParser\
-    llvm/tools/llvm-dis/Makefile
+    llvm/tools/llvm-dis/Makefile llvm/include/llvm/MC/MCParser
     do
 	git rm -rf $i;
 done

-- 
Debian repository for ClamAV



More information about the Pkg-clamav-commits mailing list